
Other articles (57)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9082)

  • How to avoid critical security issues introduced by ffmpeg, which seems to be a dependency of DeepFace for running OpenCV in a Linux environment?

    29 August 2022, by Shubham Patel

    I was trying to run DeepFace in Docker, and when I ran the container I got an error related to OpenCV.

    Following online suggestions, I installed the FFmpeg package; it resolved the OpenCV error and everything was working fine inside the container.

    I ran a docker scan to check for security issues, and it highlighted 4 critical-severity issues introduced through the FFmpeg package.

    Can anyone help me avoid these security issues?

    Below is the content of the Dockerfile:

    
FROM python:3.9.13-slim

RUN apt-get update
RUN apt-get install ffmpeg  -y

RUN pip install flask flask_cors deepface numpy pillow flask_wtf

WORKDIR /app
COPY . /app

EXPOSE 84
CMD ["python","app.py"]


    


    Below is the result of the docker scan, which uses Snyk. Note: I am only including the High and Critical severity issues.

      Description: Out-of-bounds Write
  Info: https://snyk.io/vuln/SNYK-DEBIAN11-GDKPIXBUF-2960116
  Introduced through: ffmpeg@7:4.3.4-0+deb11u1, gdk-pixbuf/libgdk-pixbuf2.0-bin@2.42.2+dfsg-1, librsvg/librsvg2-common@2.50.3+dfsg-1
  From: ffmpeg@7:4.3.4-0+deb11u1 > ffmpeg/libavcodec58@7:4.3.4-0+deb11u1 > librsvg/librsvg2-2@2.50.3+dfsg-1 > gdk-pixbuf/libgdk-pixbuf-2.0-0@2.42.2+dfsg-1 > gdk-pixbuf/libgdk-pixbuf2.0-common@2.42.2+dfsg-1
  From: gdk-pixbuf/libgdk-pixbuf2.0-bin@2.42.2+dfsg-1 > gdk-pixbuf/libgdk-pixbuf-2.0-0@2.42.2+dfsg-1
  From: librsvg/librsvg2-common@2.50.3+dfsg-1 > gdk-pixbuf/libgdk-pixbuf-2.0-0@2.42.2+dfsg-1
  and 2 more...
  Image layer: 'apt-get install ffmpeg -y'

✗ High severity vulnerability found in aom/libaom0
  Description: Out-of-bounds Write
  Info: https://snyk.io/vuln/SNYK-DEBIAN11-AOM-1085722
  Introduced through: ffmpeg@7:4.3.4-0+deb11u1
  From: ffmpeg@7:4.3.4-0+deb11u1 > ffmpeg/libavcodec58@7:4.3.4-0+deb11u1 > aom/libaom0@1.0.0.errata1-3
  Image layer: 'apt-get install ffmpeg -y'

✗ Critical severity vulnerability found in zlib/zlib1g
  Description: Out-of-bounds Write
  Info: https://snyk.io/vuln/SNYK-DEBIAN11-ZLIB-2976151
  Introduced through: meta-common-packages@meta
  From: meta-common-packages@meta > zlib/zlib1g@1:1.2.11.dfsg-2+deb11u1
  Image layer: Introduced by your base image (python:3.9.13-slim)

✗ Critical severity vulnerability found in aom/libaom0
  Description: Release of Invalid Pointer or Reference
  Info: https://snyk.io/vuln/SNYK-DEBIAN11-AOM-1290331
  Introduced through: ffmpeg@7:4.3.4-0+deb11u1
  From: ffmpeg@7:4.3.4-0+deb11u1 > ffmpeg/libavcodec58@7:4.3.4-0+deb11u1 > aom/libaom0@1.0.0.errata1-3
  Image layer: 'apt-get install ffmpeg -y'

✗ Critical severity vulnerability found in aom/libaom0
  Description: Use After Free
  Info: https://snyk.io/vuln/SNYK-DEBIAN11-AOM-1298721
  Introduced through: ffmpeg@7:4.3.4-0+deb11u1
  From: ffmpeg@7:4.3.4-0+deb11u1 > ffmpeg/libavcodec58@7:4.3.4-0+deb11u1 > aom/libaom0@1.0.0.errata1-3
  Image layer: 'apt-get install ffmpeg -y'

✗ Critical severity vulnerability found in aom/libaom0
  Description: Buffer Overflow
  Info: https://snyk.io/vuln/SNYK-DEBIAN11-AOM-1300249
  Introduced through: ffmpeg@7:4.3.4-0+deb11u1
  From: ffmpeg@7:4.3.4-0+deb11u1 > ffmpeg/libavcodec58@7:4.3.4-0+deb11u1 > aom/libaom0@1.0.0.errata1-3
  Image layer: 'apt-get install ffmpeg -y'



Organization:      16082204
Package manager:   deb
Target file:       Dockerfile
Project name:      docker-image|face-verification-v2
Docker image:      face-verification-v2
Platform:          linux/amd64
Base image:        python:3.9.13-slim
Licenses:          enabled

Tested 314 dependencies for known issues, found 120 issues.

According to our scan, you are currently using the most secure version of the selected base image
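
A common first step for questions like this: the flagged packages (libaom0, zlib1g, libgdk-pixbuf) all come in as transitive dependencies of the Debian ffmpeg package or of the base image itself, so the usual mitigation is to apply Debian security updates at build time and skip ffmpeg's recommended extras. Below is a hedged sketch of the Dockerfile above with those changes; whether this clears the specific CVEs depends on what Debian has actually patched when the image is built:

```
FROM python:3.9.13-slim

# Apply Debian security updates to base-image packages (e.g. zlib1g)
# and skip ffmpeg's recommended extra packages to shrink the dependency tree.
RUN apt-get update \
 && apt-get upgrade -y \
 && apt-get install --no-install-recommends -y ffmpeg \
 && rm -rf /var/lib/apt/lists/*

RUN pip install flask flask_cors deepface numpy pillow flask_wtf

WORKDIR /app
COPY . /app

EXPOSE 84
CMD ["python", "app.py"]
```

Re-running docker scan on the rebuilt image shows which of the four critical issues remain unpatched upstream.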

  • Getting green screen in ffplay: Streaming desktop (DirectX surface) as H264 video over RTP stream using Live555

    7 November 2019, by Ram

    I’m trying to stream the desktop (a DirectX surface in NV12 format) as H264 video over an RTP stream, using Live555 and Windows Media Foundation’s hardware encoder on Windows 10, and I expect it to be rendered by ffplay (ffmpeg 4.2). But I am only getting a green screen, like below:

    [screenshots: green frames rendered by ffplay]

    I referred to the MFWebCamToRTP Media Foundation sample and to “Encoding DirectX surface using hardware MFT” for implementing Live555’s FramedSource, and changed the input source to a DirectX surface instead of a webcam.

    Here is an excerpt of my implementation of Live555’s doGetNextFrame callback, which feeds input samples from the DirectX surface:

    virtual void doGetNextFrame()
    {
       if (!_isInitialised)
       {
           if (!initialise()) {
               printf("Video device initialisation failed, stopping.");
               return;
           }
           else {
               _isInitialised = true;
           }
       }

       //if (!isCurrentlyAwaitingData()) return;

       DWORD processOutputStatus = 0;
       HRESULT mftProcessOutput = S_OK;
       MFT_OUTPUT_STREAM_INFO StreamInfo;
       IMFMediaBuffer *pBuffer = NULL;
       IMFSample *mftOutSample = NULL;
       DWORD mftOutFlags;
       bool frameSent = false;
       bool bTimeout = false;

       // Create sample
       CComPtr<IMFSample> videoSample = NULL;

       // Create buffer
       CComPtr<IMFMediaBuffer> inputBuffer;
       // Get next event
       CComPtr<IMFMediaEvent> event;
       HRESULT hr = eventGen->GetEvent(0, &event);
       CHECK_HR(hr, "Failed to get next event");

       MediaEventType eventType;
       hr = event->GetType(&eventType);
       CHECK_HR(hr, "Failed to get event type");


       switch (eventType)
       {
       case METransformNeedInput:
           {
               hr = MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), surface, 0, FALSE, &inputBuffer);
               CHECK_HR(hr, "Failed to create IMFMediaBuffer");

               hr = MFCreateSample(&videoSample);
               CHECK_HR(hr, "Failed to create IMFSample");
               hr = videoSample->AddBuffer(inputBuffer);
               CHECK_HR(hr, "Failed to add buffer to IMFSample");

               if (videoSample)
               {
                   _frameCount++;

                   CHECK_HR(videoSample->SetSampleTime(mTimeStamp), "Error setting the video sample time.\n");
                   CHECK_HR(videoSample->SetSampleDuration(VIDEO_FRAME_DURATION), "Error getting video sample duration.\n");

                   // Pass the video sample to the H.264 transform.

                   hr = _pTransform->ProcessInput(inputStreamID, videoSample, 0);
                   CHECK_HR(hr, "The resampler H264 ProcessInput call failed.\n");

                   mTimeStamp += VIDEO_FRAME_DURATION;
               }
           }

           break;

       case METransformHaveOutput:

           {
               CHECK_HR(_pTransform->GetOutputStatus(&mftOutFlags), "H264 MFT GetOutputStatus failed.\n");

               if (mftOutFlags == MFT_OUTPUT_STATUS_SAMPLE_READY)
               {
                   MFT_OUTPUT_DATA_BUFFER _outputDataBuffer;
                   memset(&_outputDataBuffer, 0, sizeof _outputDataBuffer);
                   _outputDataBuffer.dwStreamID = outputStreamID;
                   _outputDataBuffer.dwStatus = 0;
                   _outputDataBuffer.pEvents = NULL;
                   _outputDataBuffer.pSample = nullptr;

                   mftProcessOutput = _pTransform->ProcessOutput(0, 1, &_outputDataBuffer, &processOutputStatus);

                   if (mftProcessOutput != MF_E_TRANSFORM_NEED_MORE_INPUT)
                   {
                       if (_outputDataBuffer.pSample) {

                           //CHECK_HR(_outputDataBuffer.pSample->SetSampleTime(mTimeStamp), "Error setting MFT sample time.\n");
                           //CHECK_HR(_outputDataBuffer.pSample->SetSampleDuration(VIDEO_FRAME_DURATION), "Error setting MFT sample duration.\n");

                           IMFMediaBuffer *buf = NULL;
                           DWORD bufLength;
                           CHECK_HR(_outputDataBuffer.pSample->ConvertToContiguousBuffer(&buf), "ConvertToContiguousBuffer failed.\n");
                           CHECK_HR(buf->GetCurrentLength(&bufLength), "Get buffer length failed.\n");
                           BYTE * rawBuffer = NULL;

                           fFrameSize = bufLength;
                           fDurationInMicroseconds = 0;
                           gettimeofday(&fPresentationTime, NULL);

                           buf->Lock(&rawBuffer, NULL, NULL);
                           memmove(fTo, rawBuffer, fFrameSize);

                           FramedSource::afterGetting(this);

                           buf->Unlock();
                           SafeRelease(&buf);

                           frameSent = true;
                           _lastSendAt = GetTickCount();

                           _outputDataBuffer.pSample->Release();
                       }

                       if (_outputDataBuffer.pEvents)
                           _outputDataBuffer.pEvents->Release();
                   }

                   //SafeRelease(&pBuffer);
                   //SafeRelease(&mftOutSample);

                   break;
               }
           }

           break;
       }

       if (!frameSent)
       {
           envir().taskScheduler().triggerEvent(eventTriggerId, this);
       }

       return;

    done:

       printf("MediaFoundationH264LiveSource doGetNextFrame failed.\n");
       envir().taskScheduler().triggerEvent(eventTriggerId, this);
    }

    Initialise method:

    bool initialise()
    {
       HRESULT hr;
       D3D11_TEXTURE2D_DESC desc = { 0 };

       HDESK CurrentDesktop = nullptr;
       CurrentDesktop = OpenInputDesktop(0, FALSE, GENERIC_ALL);
       if (!CurrentDesktop)
       {
           // We do not have access to the desktop so request a retry
           return false;
       }

       // Attach desktop to this thread
       bool DesktopAttached = SetThreadDesktop(CurrentDesktop) != 0;
       CloseDesktop(CurrentDesktop);
       CurrentDesktop = nullptr;
       if (!DesktopAttached)
       {
           printf("SetThreadDesktop failed\n");
       }

       UINT32 activateCount = 0;

       // h264 output
       MFT_REGISTER_TYPE_INFO info = { MFMediaType_Video, MFVideoFormat_H264 };

       UINT32 flags =
           MFT_ENUM_FLAG_HARDWARE |
           MFT_ENUM_FLAG_SORTANDFILTER;

       // ------------------------------------------------------------------------
       // Initialize D3D11
       // ------------------------------------------------------------------------

       // Driver types supported
       D3D_DRIVER_TYPE DriverTypes[] =
       {
           D3D_DRIVER_TYPE_HARDWARE,
           D3D_DRIVER_TYPE_WARP,
           D3D_DRIVER_TYPE_REFERENCE,
       };
       UINT NumDriverTypes = ARRAYSIZE(DriverTypes);

       // Feature levels supported
       D3D_FEATURE_LEVEL FeatureLevels[] =
       {
           D3D_FEATURE_LEVEL_11_0,
           D3D_FEATURE_LEVEL_10_1,
           D3D_FEATURE_LEVEL_10_0,
           D3D_FEATURE_LEVEL_9_1
       };
       UINT NumFeatureLevels = ARRAYSIZE(FeatureLevels);

       D3D_FEATURE_LEVEL FeatureLevel;

       // Create device
       for (UINT DriverTypeIndex = 0; DriverTypeIndex < NumDriverTypes; ++DriverTypeIndex)
       {
           hr = D3D11CreateDevice(nullptr, DriverTypes[DriverTypeIndex], nullptr,
               D3D11_CREATE_DEVICE_VIDEO_SUPPORT,
               FeatureLevels, NumFeatureLevels, D3D11_SDK_VERSION, &device, &FeatureLevel, &context);
           if (SUCCEEDED(hr))
           {
               // Device creation success, no need to loop anymore
               break;
           }
       }

       CHECK_HR(hr, "Failed to create device");

       // Create device manager
       UINT resetToken;
       hr = MFCreateDXGIDeviceManager(&resetToken, &deviceManager);
       CHECK_HR(hr, "Failed to create DXGIDeviceManager");

       hr = deviceManager->ResetDevice(device, resetToken);
       CHECK_HR(hr, "Failed to assign D3D device to device manager");


       // ------------------------------------------------------------------------
       // Create surface
       // ------------------------------------------------------------------------
       desc.Format = DXGI_FORMAT_NV12;
       desc.Width = surfaceWidth;
       desc.Height = surfaceHeight;
       desc.MipLevels = 1;
       desc.ArraySize = 1;
       desc.SampleDesc.Count = 1;

       hr = device->CreateTexture2D(&desc, NULL, &surface);
       CHECK_HR(hr, "Could not create surface");

       hr = MFTEnumEx(
           MFT_CATEGORY_VIDEO_ENCODER,
           flags,
           NULL,
           &info,
           &activateRaw,
           &activateCount
       );
       CHECK_HR(hr, "Failed to enumerate MFTs");

       CHECK(activateCount, "No MFTs found");

       // Choose the first available encoder
       activate = activateRaw[0];

       for (UINT32 i = 0; i < activateCount; i++)
           activateRaw[i]->Release();

       // Activate
       hr = activate->ActivateObject(IID_PPV_ARGS(&_pTransform));
       CHECK_HR(hr, "Failed to activate MFT");

       // Get attributes
       hr = _pTransform->GetAttributes(&attributes);
       CHECK_HR(hr, "Failed to get MFT attributes");

       // Unlock the transform for async use and get event generator
       hr = attributes->SetUINT32(MF_TRANSFORM_ASYNC_UNLOCK, TRUE);
       CHECK_HR(hr, "Failed to unlock MFT");

       eventGen = _pTransform;
       CHECK(eventGen, "Failed to QI for event generator");

       // Get stream IDs (expect 1 input and 1 output stream)
       hr = _pTransform->GetStreamIDs(1, &inputStreamID, 1, &outputStreamID);
       if (hr == E_NOTIMPL)
       {
           inputStreamID = 0;
           outputStreamID = 0;
           hr = S_OK;
       }
       CHECK_HR(hr, "Failed to get stream IDs");

       // ------------------------------------------------------------------------
       // Configure hardware encoder MFT
       // ------------------------------------------------------------------------
       CHECK_HR(_pTransform->ProcessMessage(MFT_MESSAGE_SET_D3D_MANAGER, reinterpret_cast<ULONG_PTR>(deviceManager.p)), "Failed to set device manager.\n");

       // Set low latency hint
       hr = attributes->SetUINT32(MF_LOW_LATENCY, TRUE);
       CHECK_HR(hr, "Failed to set MF_LOW_LATENCY");

       hr = MFCreateMediaType(&outputType);
       CHECK_HR(hr, "Failed to create media type");

       hr = outputType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
       CHECK_HR(hr, "Failed to set MF_MT_MAJOR_TYPE on H264 output media type");

       hr = outputType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
       CHECK_HR(hr, "Failed to set MF_MT_SUBTYPE on H264 output media type");

       hr = outputType->SetUINT32(MF_MT_AVG_BITRATE, TARGET_AVERAGE_BIT_RATE);
       CHECK_HR(hr, "Failed to set average bit rate on H264 output media type");

       hr = MFSetAttributeSize(outputType, MF_MT_FRAME_SIZE, desc.Width, desc.Height);
       CHECK_HR(hr, "Failed to set frame size on H264 MFT out type");

       hr = MFSetAttributeRatio(outputType, MF_MT_FRAME_RATE, TARGET_FRAME_RATE, 1);
       CHECK_HR(hr, "Failed to set frame rate on H264 MFT out type");

       hr = outputType->SetUINT32(MF_MT_INTERLACE_MODE, 2);
       CHECK_HR(hr, "Failed to set MF_MT_INTERLACE_MODE on H.264 encoder MFT");

       hr = outputType->SetUINT32(MF_MT_ALL_SAMPLES_INDEPENDENT, TRUE);
       CHECK_HR(hr, "Failed to set MF_MT_ALL_SAMPLES_INDEPENDENT on H.264 encoder MFT");

       hr = _pTransform->SetOutputType(outputStreamID, outputType, 0);
       CHECK_HR(hr, "Failed to set output media type on H.264 encoder MFT");

       hr = MFCreateMediaType(&inputType);
       CHECK_HR(hr, "Failed to create media type");

       for (DWORD i = 0;; i++)
       {
           inputType = nullptr;
           hr = _pTransform->GetInputAvailableType(inputStreamID, i, &inputType);
           CHECK_HR(hr, "Failed to get input type");

           hr = inputType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
           CHECK_HR(hr, "Failed to set MF_MT_MAJOR_TYPE on H264 MFT input type");

           hr = inputType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_NV12);
           CHECK_HR(hr, "Failed to set MF_MT_SUBTYPE on H264 MFT input type");

           hr = MFSetAttributeSize(inputType, MF_MT_FRAME_SIZE, desc.Width, desc.Height);
           CHECK_HR(hr, "Failed to set MF_MT_FRAME_SIZE on H264 MFT input type");

           hr = MFSetAttributeRatio(inputType, MF_MT_FRAME_RATE, TARGET_FRAME_RATE, 1);
           CHECK_HR(hr, "Failed to set MF_MT_FRAME_RATE on H264 MFT input type");

           hr = _pTransform->SetInputType(inputStreamID, inputType, 0);
           CHECK_HR(hr, "Failed to set input type");

           break;
       }

       CheckHardwareSupport();

       CHECK_HR(_pTransform->ProcessMessage(MFT_MESSAGE_COMMAND_FLUSH, NULL), "Failed to process FLUSH command on H.264 MFT.\n");
       CHECK_HR(_pTransform->ProcessMessage(MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, NULL), "Failed to process BEGIN_STREAMING command on H.264 MFT.\n");
       CHECK_HR(_pTransform->ProcessMessage(MFT_MESSAGE_NOTIFY_START_OF_STREAM, NULL), "Failed to process START_OF_STREAM command on H.264 MFT.\n");

       return true;

    done:

       printf("MediaFoundationH264LiveSource initialisation failed.\n");
       return false;
    }


       HRESULT CheckHardwareSupport()
       {
           IMFAttributes *attributes;
           HRESULT hr = _pTransform->GetAttributes(&attributes);
           UINT32 dxva = 0;

           if (SUCCEEDED(hr))
           {
               hr = attributes->GetUINT32(MF_SA_D3D11_AWARE, &dxva);
           }

           if (SUCCEEDED(hr))
           {
               hr = attributes->SetUINT32(CODECAPI_AVDecVideoAcceleration_H264, TRUE);
           }

    #if defined(CODECAPI_AVLowLatencyMode) // Win8 only

           hr = _pTransform->QueryInterface(IID_PPV_ARGS(&mpCodecAPI));

           if (SUCCEEDED(hr))
           {
               VARIANT var = { 0 };

               // FIXME: encoder only
               var.vt = VT_UI4;
               var.ulVal = 0;

               hr = mpCodecAPI->SetValue(&CODECAPI_AVEncMPVDefaultBPictureCount, &var);

               var.vt = VT_BOOL;
               var.boolVal = VARIANT_TRUE;
               hr = mpCodecAPI->SetValue(&CODECAPI_AVEncCommonLowLatency, &var);
               hr = mpCodecAPI->SetValue(&CODECAPI_AVEncCommonRealTime, &var);

               hr = attributes->SetUINT32(CODECAPI_AVLowLatencyMode, TRUE);

               if (SUCCEEDED(hr))
               {
                   var.vt = VT_UI4;
                   var.ulVal = eAVEncCommonRateControlMode_Quality;
                   hr = mpCodecAPI->SetValue(&CODECAPI_AVEncCommonRateControlMode, &var);

                   // This property controls the quality level when the encoder is not using a constrained bit rate. The AVEncCommonRateControlMode property determines whether the bit rate is constrained.
                   VARIANT quality;
                   InitVariantFromUInt32(50, &quality);
                   hr = mpCodecAPI->SetValue(&CODECAPI_AVEncCommonQuality, &quality);
               }
           }
    #endif

           return hr;
       }

    ffplay command:

    ffplay -protocol_whitelist file,udp,rtp -i test.sdp -x 800 -y 600 -profile:v baseline

    SDP:

    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    t=0 0
    c=IN IP4 127.0.0.1
    m=video 1234 RTP/AVP 96
    a=rtpmap:96 H264/90000
    a=fmtp:96 packetization-mode=1
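
For what it’s worth, a green picture in ffplay often means the decoder never received the SPS/PPS parameter sets. The SDP above carries no sprop-parameter-sets attribute, so ffplay can only pick the parameter sets up in-band; a variant that signals them out of band would look like the following, where profile-level-id and the base64 strings are placeholders for the encoder’s actual SPS/PPS, not real values:

```
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
t=0 0
c=IN IP4 127.0.0.1
m=video 1234 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1;profile-level-id=42C01E;sprop-parameter-sets=<SPS-base64>,<PPS-base64>
```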

    I don’t know what I’m missing; I have been trying to fix this for almost a week without any progress, and I have tried almost everything I could. The online resources for encoding a DirectX surface as video are also very limited.

    Any help would be appreciated.

  • ffmpeg does not decode some h264 streams

    12 January 2016, by Andrey

    I have some IP cameras on the local network. I receive the video stream with the live555 library (I took testRtspClient as a basis) and decode frames with ffmpeg (avcodec_decode_video2). Everything works perfectly.
    Problems begin when I try to decode a stream from the internet.

    The first problem: some packets are lost, so visual defects appear. But that is not the real issue. The issue is that after stopping and restarting the video stream, I have to wait about 5 minutes of streaming before ffmpeg is able to decode anything from the same IP camera. If no packets are lost, everything is fine.

    The second problem: there is a camera which sends video at a resolution of 2048×1538. A frame of that resolution is sent in several packets. live555 reassembles them correctly, but when the frame is passed to the decoder, the decoder returns the packet length while got_frame stays 0.

    Here is some of my code:

    #define RECEIVE_BUFFER_SIZE 1000000
    AVCodecContext* avCodecContext; //definition
    AVFrame *frame;  //definition
    ...
    //init code
    _fReceiveBuffer = new uint8_t[RECEIVE_BUFFER_SIZE+512]; //buffer to receive frame
    ZeroMemory(_fReceiveBuffer, RECEIVE_BUFFER_SIZE + 512); //zeros
    _bufferSize = RECEIVE_BUFFER_SIZE * sizeof(uint8_t); //buffer size

    static const uint8_t startCode[4] = { 0x00, 0x00, 0x00, 0x01 }; //this is for the 0 0 0 1 start code,
    //prepended before the frame is passed to the decoder
    memcpy(_fReceiveBuffer, (void*)startCode, sizeof(uint8_t)* 4);
    _fReceiveBuffer += sizeof(uint8_t)* 4;
    _bufferSize -= sizeof(uint8_t)* 4;

    AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264); //find codec

    avCodecContext = avcodec_alloc_context3(codec);
    avCodecContext->flags |= AV_PKT_FLAG_KEY;
    avcodec_open2(avCodecContext, codec, NULL);

    frame = av_frame_alloc();

    //frame
    void DummySink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
    struct timeval presentationTime, unsigned durationInMicroseconds) {

    if (strcmp(fSubsession.codecName(), "H264") == 0)
    {
       //code from onvif device manager
       static const uint8_t startCode3[] = { 0x00, 0x00, 0x01 };
       static const uint8_t startCode4[] = { 0x00, 0x00, 0x00, 0x01 };
       auto correctedFrameSize = frameSize;
       auto correctedBufferPtr = fPlObj->_fReceiveBuffer;
       if (frameSize < sizeof(startCode4) || memcmp(startCode4, correctedBufferPtr, sizeof(startCode4)) != 0){
           if (frameSize < sizeof(startCode3) || memcmp(startCode3, correctedBufferPtr, sizeof(startCode3)) != 0){
               correctedFrameSize += sizeof(uint8_t)* 4;
               correctedBufferPtr -= sizeof(uint8_t)* 4;
           }
       }

       ProcessFrame(correctedBufferPtr, correctedFrameSize, presentationTime, durationInMicroseconds);
    }
    continuePlaying();
    }

    void DummySink::ProcessFrame(unsigned char* framePtr, int frameSize, struct timeval presentationTime, unsigned duration)    {

    AVPacket avpkt;
    av_init_packet(&avpkt);
    avpkt.data = framePtr;
    avpkt.size = frameSize;
    while (avpkt.size > 0) {
       int got_frame = 0;

       int len = avcodec_decode_video2(avCodecContext, frame, &got_frame, &avpkt);
       if (len < 0) {
           //TODO: log error
           return;
       }
       else if (got_frame == 0)
       {
    //I tried this code, because "codecs which have the AV_CODEC_CAP_DELAY capability set have a delay between input and output"
    //but it didn't help
           /*AVPacket emptyPacket;
           av_init_packet(&emptyPacket);
           emptyPacket.data = NULL;
           emptyPacket.size = 0;
           emptyPacket.stream_index = avpkt.stream_index;
           len = avcodec_decode_video2(avCodecContext, frame, &got_frame, &emptyPacket);
           if ( got_frame == 1) goto next;*/
           return;
       }
    next:
       //... here is the code for display with DirectDraw - everything is fine with it
       avpkt.size -= len;
       avpkt.data += len;
    }
    }

    I also tried to send the frame to the decoder with SPS and PPS information:

    0 0 0 1 sps 0 0 0 1 pps 0 0 0 1 frame

    but it did not help.
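
The Annex B framing discussed here (a 0 0 0 1 start code before each NAL unit, including SPS and PPS) can be sketched in a self-contained way; the helper names below are hypothetical and only illustrate the byte layout the decoder expects:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Length of the Annex B start code at the front of the buffer:
// 4 for 00 00 00 01, 3 for 00 00 01, 0 if there is none.
static int startCodeLength(const uint8_t* p, size_t n)
{
    static const uint8_t sc4[4] = { 0x00, 0x00, 0x00, 0x01 };
    static const uint8_t sc3[3] = { 0x00, 0x00, 0x01 };
    if (n >= 4 && std::memcmp(p, sc4, 4) == 0) return 4;
    if (n >= 3 && std::memcmp(p, sc3, 3) == 0) return 3;
    return 0;
}

// Prepend a 4-byte start code to a raw NAL unit, as the receive
// buffer above is meant to do before the data reaches the decoder.
static std::vector<uint8_t> withStartCode(const uint8_t* nal, size_t n)
{
    std::vector<uint8_t> out = { 0x00, 0x00, 0x00, 0x01 };
    out.insert(out.end(), nal, nal + n);
    return out;
}
```

This is the same check afterGettingFrame performs: the extra 4 bytes are only counted in when the payload does not already begin with a 3- or 4-byte start code.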

    Interestingly, with the second problem avcodec_decode_video2 does not return a frame (it returns the full size of the packet), yet the width and height in avCodecContext are set correctly. I can’t understand why it doesn’t return a frame.

    Can anyone help with these problems?