
Media (91)

Other articles (51)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Making the files available

    14 April 2011

    By default, on initialisation, MediaSPIP does not let visitors download files, whether they are originals or the result of transformation or encoding; it only lets them be viewed.
    However, it is possible and easy to give visitors access to these documents in several different forms.
    All of this is handled in the skeleton's configuration page. You need to go to the channel's administration area and choose, in the navigation, (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To obtain a working installation, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

On other sites (7769)

  • Add subtitles to multiple files at once

    20 May 2020, by Paul Filipenco

    I have a folder with episodes called ep and a folder with subtitles called sub.
    Each episode has corresponding subtitles and I need to bulk-add them with ffmpeg.

    I've read that I can add subtitles with the following command:

    ffmpeg -i video.avi -vf "ass=subtitle.ass" out.avi

    But that only does it one file at a time. Is there a bulk variant?

    Some useful info:
    ls ep prints

    



    <series> - Ep<episode number>.mkv

    ls sub prints

    <series> - Ep<episode number>.ass
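
    ffmpeg has no built-in batch mode for this, so the bulk step is normally a shell loop. A minimal sketch, assuming each ep/*.mkv has an identically named .ass file in sub/ (the out/ directory is hypothetical):

    #!/bin/bash
    # Burn each episode's matching subtitles into the video.
    # Assumes identical base names in ep/ and sub/, as the ls output suggests.
    mkdir -p out
    for video in ep/*.mkv; do
        base=$(basename "$video" .mkv)
        # The ass filter takes a filename; names containing ':' or ','
        # may need filter-argument escaping.
        ffmpeg -i "$video" -vf "ass=sub/$base.ass" "out/$base.mkv"
    done

    Note that burning subtitles in re-encodes the video; if a selectable subtitle track is enough, muxing the .ass into the .mkv with -c copy avoids the re-encode.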

  • Getting green screen in ffplay: Streaming desktop (DirectX surface) as H264 video over RTP stream using Live555

    7 November 2019, by Ram

    I’m trying to stream the desktop (a DirectX surface in NV12 format) as H264 video over an RTP stream using Live555 and the Windows Media Foundation hardware encoder on Windows 10, expecting it to be rendered by ffplay (ffmpeg 4.2). But I only get a green screen, as below:

    [screenshots: ffplay renders only a solid green frame]

    I referred to the MFWebCamToRTP Media Foundation sample and "Encoding DirectX surface using hardware MFT" to implement Live555's FramedSource, changing the input source to a DirectX surface instead of the webcam.

    Here is an excerpt of my implementation of Live555's doGetNextFrame callback, which feeds input samples from the DirectX surface:

    virtual void doGetNextFrame()
    {
       if (!_isInitialised)
       {
           if (!initialise()) {
               printf("Video device initialisation failed, stopping.");
               return;
           }
           else {
               _isInitialised = true;
           }
       }

       //if (!isCurrentlyAwaitingData()) return;

       DWORD processOutputStatus = 0;
       HRESULT mftProcessOutput = S_OK;
       MFT_OUTPUT_STREAM_INFO StreamInfo;
       IMFMediaBuffer *pBuffer = NULL;
       IMFSample *mftOutSample = NULL;
       DWORD mftOutFlags;
       bool frameSent = false;
       bool bTimeout = false;

       // Create sample
       CComPtr<IMFSample> videoSample = NULL;

       // Create buffer
       CComPtr<IMFMediaBuffer> inputBuffer;
       // Get next event
       CComPtr<IMFMediaEvent> event;
       HRESULT hr = eventGen->GetEvent(0, &event);
       CHECK_HR(hr, "Failed to get next event");

       MediaEventType eventType;
       hr = event->GetType(&eventType);
       CHECK_HR(hr, "Failed to get event type");


       switch (eventType)
       {
       case METransformNeedInput:
           {
               hr = MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), surface, 0, FALSE, &inputBuffer);
               CHECK_HR(hr, "Failed to create IMFMediaBuffer");

               hr = MFCreateSample(&videoSample);
               CHECK_HR(hr, "Failed to create IMFSample");
               hr = videoSample->AddBuffer(inputBuffer);
               CHECK_HR(hr, "Failed to add buffer to IMFSample");

               if (videoSample)
               {
                   _frameCount++;

                   CHECK_HR(videoSample->SetSampleTime(mTimeStamp), "Error setting the video sample time.\n");
                   CHECK_HR(videoSample->SetSampleDuration(VIDEO_FRAME_DURATION), "Error setting the video sample duration.\n");

                   // Pass the video sample to the H.264 transform.

                   hr = _pTransform->ProcessInput(inputStreamID, videoSample, 0);
                   CHECK_HR(hr, "The resampler H264 ProcessInput call failed.\n");

                   mTimeStamp += VIDEO_FRAME_DURATION;
               }
           }

           break;

       case METransformHaveOutput:

           {
               CHECK_HR(_pTransform->GetOutputStatus(&mftOutFlags), "H264 MFT GetOutputStatus failed.\n");

               if (mftOutFlags == MFT_OUTPUT_STATUS_SAMPLE_READY)
               {
                   MFT_OUTPUT_DATA_BUFFER _outputDataBuffer;
                   memset(&_outputDataBuffer, 0, sizeof _outputDataBuffer);
                   _outputDataBuffer.dwStreamID = outputStreamID;
                   _outputDataBuffer.dwStatus = 0;
                   _outputDataBuffer.pEvents = NULL;
                   _outputDataBuffer.pSample = nullptr;

                   mftProcessOutput = _pTransform->ProcessOutput(0, 1, &_outputDataBuffer, &processOutputStatus);

                   if (mftProcessOutput != MF_E_TRANSFORM_NEED_MORE_INPUT)
                   {
                       if (_outputDataBuffer.pSample) {

                           //CHECK_HR(_outputDataBuffer.pSample->SetSampleTime(mTimeStamp), "Error setting MFT sample time.\n");
                           //CHECK_HR(_outputDataBuffer.pSample->SetSampleDuration(VIDEO_FRAME_DURATION), "Error setting MFT sample duration.\n");

                           IMFMediaBuffer *buf = NULL;
                           DWORD bufLength;
                           CHECK_HR(_outputDataBuffer.pSample->ConvertToContiguousBuffer(&buf), "ConvertToContiguousBuffer failed.\n");
                           CHECK_HR(buf->GetCurrentLength(&bufLength), "Get buffer length failed.\n");
                           BYTE * rawBuffer = NULL;

                           fFrameSize = bufLength;
                           fDurationInMicroseconds = 0;
                           gettimeofday(&fPresentationTime, NULL);

                           buf->Lock(&rawBuffer, NULL, NULL);
                           memmove(fTo, rawBuffer, fFrameSize);

                           FramedSource::afterGetting(this);

                           buf->Unlock();
                           SafeRelease(&buf);

                           frameSent = true;
                           _lastSendAt = GetTickCount();

                           _outputDataBuffer.pSample->Release();
                       }

                       if (_outputDataBuffer.pEvents)
                           _outputDataBuffer.pEvents->Release();
                   }

                   //SafeRelease(&pBuffer);
                   //SafeRelease(&mftOutSample);

                   break;
               }
           }

           break;
       }

       if (!frameSent)
       {
           envir().taskScheduler().triggerEvent(eventTriggerId, this);
       }

       return;

    done:

       printf("MediaFoundationH264LiveSource doGetNextFrame failed.\n");
       envir().taskScheduler().triggerEvent(eventTriggerId, this);
    }

    Initialise method:

    bool initialise()
    {
       HRESULT hr;
       D3D11_TEXTURE2D_DESC desc = { 0 };

       HDESK CurrentDesktop = nullptr;
       CurrentDesktop = OpenInputDesktop(0, FALSE, GENERIC_ALL);
       if (!CurrentDesktop)
       {
           // We do not have access to the desktop so request a retry
           return false;
       }

       // Attach desktop to this thread
       bool DesktopAttached = SetThreadDesktop(CurrentDesktop) != 0;
       CloseDesktop(CurrentDesktop);
       CurrentDesktop = nullptr;
       if (!DesktopAttached)
       {
           printf("SetThreadDesktop failed\n");
       }

       UINT32 activateCount = 0;

       // h264 output
       MFT_REGISTER_TYPE_INFO info = { MFMediaType_Video, MFVideoFormat_H264 };

       UINT32 flags =
           MFT_ENUM_FLAG_HARDWARE |
           MFT_ENUM_FLAG_SORTANDFILTER;

       // ------------------------------------------------------------------------
       // Initialize D3D11
       // ------------------------------------------------------------------------

       // Driver types supported
       D3D_DRIVER_TYPE DriverTypes[] =
       {
           D3D_DRIVER_TYPE_HARDWARE,
           D3D_DRIVER_TYPE_WARP,
           D3D_DRIVER_TYPE_REFERENCE,
       };
       UINT NumDriverTypes = ARRAYSIZE(DriverTypes);

       // Feature levels supported
       D3D_FEATURE_LEVEL FeatureLevels[] =
       {
           D3D_FEATURE_LEVEL_11_0,
           D3D_FEATURE_LEVEL_10_1,
           D3D_FEATURE_LEVEL_10_0,
           D3D_FEATURE_LEVEL_9_1
       };
       UINT NumFeatureLevels = ARRAYSIZE(FeatureLevels);

       D3D_FEATURE_LEVEL FeatureLevel;

       // Create device
       for (UINT DriverTypeIndex = 0; DriverTypeIndex < NumDriverTypes; ++DriverTypeIndex)
       {
           hr = D3D11CreateDevice(nullptr, DriverTypes[DriverTypeIndex], nullptr,
               D3D11_CREATE_DEVICE_VIDEO_SUPPORT,
               FeatureLevels, NumFeatureLevels, D3D11_SDK_VERSION, &device, &FeatureLevel, &context);
           if (SUCCEEDED(hr))
           {
               // Device creation success, no need to loop anymore
               break;
           }
       }

       CHECK_HR(hr, "Failed to create device");

       // Create device manager
       UINT resetToken;
       hr = MFCreateDXGIDeviceManager(&resetToken, &deviceManager);
       CHECK_HR(hr, "Failed to create DXGIDeviceManager");

       hr = deviceManager->ResetDevice(device, resetToken);
       CHECK_HR(hr, "Failed to assign D3D device to device manager");


       // ------------------------------------------------------------------------
       // Create surface
       // ------------------------------------------------------------------------
       desc.Format = DXGI_FORMAT_NV12;
       desc.Width = surfaceWidth;
       desc.Height = surfaceHeight;
       desc.MipLevels = 1;
       desc.ArraySize = 1;
       desc.SampleDesc.Count = 1;

       hr = device->CreateTexture2D(&desc, NULL, &surface);
       CHECK_HR(hr, "Could not create surface");

       hr = MFTEnumEx(
           MFT_CATEGORY_VIDEO_ENCODER,
           flags,
           NULL,
           &info,
           &activateRaw,
           &activateCount
       );
       CHECK_HR(hr, "Failed to enumerate MFTs");

       CHECK(activateCount, "No MFTs found");

       // Choose the first available encoder
       activate = activateRaw[0];

       for (UINT32 i = 0; i < activateCount; i++)
           activateRaw[i]->Release();

       // Activate
       hr = activate->ActivateObject(IID_PPV_ARGS(&_pTransform));
       CHECK_HR(hr, "Failed to activate MFT");

       // Get attributes
       hr = _pTransform->GetAttributes(&attributes);
       CHECK_HR(hr, "Failed to get MFT attributes");

       // Unlock the transform for async use and get event generator
       hr = attributes->SetUINT32(MF_TRANSFORM_ASYNC_UNLOCK, TRUE);
       CHECK_HR(hr, "Failed to unlock MFT");

       eventGen = _pTransform;
       CHECK(eventGen, "Failed to QI for event generator");

       // Get stream IDs (expect 1 input and 1 output stream)
       hr = _pTransform->GetStreamIDs(1, &inputStreamID, 1, &outputStreamID);
       if (hr == E_NOTIMPL)
       {
           inputStreamID = 0;
           outputStreamID = 0;
           hr = S_OK;
       }
       CHECK_HR(hr, "Failed to get stream IDs");

       // ------------------------------------------------------------------------
       // Configure hardware encoder MFT
       // ------------------------------------------------------------------------
       CHECK_HR(_pTransform->ProcessMessage(MFT_MESSAGE_SET_D3D_MANAGER, reinterpret_cast<ULONG_PTR>(deviceManager.p)), "Failed to set device manager.\n");

       // Set low latency hint
       hr = attributes->SetUINT32(MF_LOW_LATENCY, TRUE);
       CHECK_HR(hr, "Failed to set MF_LOW_LATENCY");

       hr = MFCreateMediaType(&outputType);
       CHECK_HR(hr, "Failed to create media type");

       hr = outputType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
       CHECK_HR(hr, "Failed to set MF_MT_MAJOR_TYPE on H264 output media type");

       hr = outputType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
       CHECK_HR(hr, "Failed to set MF_MT_SUBTYPE on H264 output media type");

       hr = outputType->SetUINT32(MF_MT_AVG_BITRATE, TARGET_AVERAGE_BIT_RATE);
       CHECK_HR(hr, "Failed to set average bit rate on H264 output media type");

       hr = MFSetAttributeSize(outputType, MF_MT_FRAME_SIZE, desc.Width, desc.Height);
       CHECK_HR(hr, "Failed to set frame size on H264 MFT out type");

       hr = MFSetAttributeRatio(outputType, MF_MT_FRAME_RATE, TARGET_FRAME_RATE, 1);
       CHECK_HR(hr, "Failed to set frame rate on H264 MFT out type");

       hr = outputType->SetUINT32(MF_MT_INTERLACE_MODE, 2); // 2 = MFVideoInterlace_Progressive
       CHECK_HR(hr, "Failed to set MF_MT_INTERLACE_MODE on H.264 encoder MFT");

       hr = outputType->SetUINT32(MF_MT_ALL_SAMPLES_INDEPENDENT, TRUE);
       CHECK_HR(hr, "Failed to set MF_MT_ALL_SAMPLES_INDEPENDENT on H.264 encoder MFT");

       hr = _pTransform->SetOutputType(outputStreamID, outputType, 0);
       CHECK_HR(hr, "Failed to set output media type on H.264 encoder MFT");

       hr = MFCreateMediaType(&inputType);
       CHECK_HR(hr, "Failed to create media type");

       for (DWORD i = 0;; i++)
       {
           inputType = nullptr;
           hr = _pTransform->GetInputAvailableType(inputStreamID, i, &inputType);
           CHECK_HR(hr, "Failed to get input type");

           hr = inputType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
           CHECK_HR(hr, "Failed to set MF_MT_MAJOR_TYPE on H264 MFT input type");

           hr = inputType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_NV12);
           CHECK_HR(hr, "Failed to set MF_MT_SUBTYPE on H264 MFT input type");

           hr = MFSetAttributeSize(inputType, MF_MT_FRAME_SIZE, desc.Width, desc.Height);
           CHECK_HR(hr, "Failed to set MF_MT_FRAME_SIZE on H264 MFT input type");

           hr = MFSetAttributeRatio(inputType, MF_MT_FRAME_RATE, TARGET_FRAME_RATE, 1);
           CHECK_HR(hr, "Failed to set MF_MT_FRAME_RATE on H264 MFT input type");

           hr = _pTransform->SetInputType(inputStreamID, inputType, 0);
           CHECK_HR(hr, "Failed to set input type");

           break;
       }

       CheckHardwareSupport();

       CHECK_HR(_pTransform->ProcessMessage(MFT_MESSAGE_COMMAND_FLUSH, NULL), "Failed to process FLUSH command on H.264 MFT.\n");
       CHECK_HR(_pTransform->ProcessMessage(MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, NULL), "Failed to process BEGIN_STREAMING command on H.264 MFT.\n");
       CHECK_HR(_pTransform->ProcessMessage(MFT_MESSAGE_NOTIFY_START_OF_STREAM, NULL), "Failed to process START_OF_STREAM command on H.264 MFT.\n");

       return true;

    done:

       printf("MediaFoundationH264LiveSource initialisation failed.\n");
       return false;
    }


       HRESULT CheckHardwareSupport()
       {
           IMFAttributes *attributes;
           HRESULT hr = _pTransform->GetAttributes(&attributes);
           UINT32 dxva = 0;

           if (SUCCEEDED(hr))
           {
               hr = attributes->GetUINT32(MF_SA_D3D11_AWARE, &dxva);
           }

           if (SUCCEEDED(hr))
           {
               hr = attributes->SetUINT32(CODECAPI_AVDecVideoAcceleration_H264, TRUE);
           }

    #if defined(CODECAPI_AVLowLatencyMode) // Win8 only

           hr = _pTransform->QueryInterface(IID_PPV_ARGS(&mpCodecAPI));

           if (SUCCEEDED(hr))
           {
               VARIANT var = { 0 };

               // FIXME: encoder only
               var.vt = VT_UI4;
               var.ulVal = 0;

               hr = mpCodecAPI->SetValue(&CODECAPI_AVEncMPVDefaultBPictureCount, &var);

               var.vt = VT_BOOL;
               var.boolVal = VARIANT_TRUE;
               hr = mpCodecAPI->SetValue(&CODECAPI_AVEncCommonLowLatency, &var);
               hr = mpCodecAPI->SetValue(&CODECAPI_AVEncCommonRealTime, &var);

               hr = attributes->SetUINT32(CODECAPI_AVLowLatencyMode, TRUE);

               if (SUCCEEDED(hr))
               {
                   var.vt = VT_UI4;
                   var.ulVal = eAVEncCommonRateControlMode_Quality;
                   hr = mpCodecAPI->SetValue(&CODECAPI_AVEncCommonRateControlMode, &var);

                   // This property controls the quality level when the encoder is not using a constrained bit rate.
                   // The AVEncCommonRateControlMode property determines whether the bit rate is constrained.
                   VARIANT quality;
                   InitVariantFromUInt32(50, &quality);
                   hr = mpCodecAPI->SetValue(&CODECAPI_AVEncCommonQuality, &quality);
               }
               }
           }
    #endif

           return hr;
       }

    ffplay command:

    ffplay -protocol_whitelist file,udp,rtp -i test.sdp -x 800 -y 600 -profile:v baseline

    SDP:

    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    t=0 0
    c=IN IP4 127.0.0.1
    m=video 1234 RTP/AVP 96
    a=rtpmap:96 H264/90000
    a=fmtp:96 packetization-mode=1
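
    A sanity check worth trying, sketched under the assumption that the SDP above is in use: stream a known-good test pattern to the same address and port and see whether ffplay renders it. If it does, the receive path is fine and the fault sits in the capture/encode pipeline.

    # Send a synthetic test pattern as baseline H.264 over RTP to 127.0.0.1:1234.
    ffmpeg -re -f lavfi -i testsrc=size=1280x720:rate=30 \
      -c:v libx264 -profile:v baseline -tune zerolatency \
      -f rtp rtp://127.0.0.1:1234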

    I don’t know what I am missing. I have been trying to fix this for almost a week without any progress, and have tried almost everything I could. Also, online resources for encoding a DirectX surface as video are very limited.

    Any help would be appreciated.

  • HLS error after changing ts files with ffmpeg

    9 August 2019, by Mohsen Rahnamaei

    I want to manipulate each ts file separately with an ffmpeg command; for example, I want to write something on each ts file. To do that, I first generate the m3u8 and the ts files with an ffmpeg command, after which I can play the stream correctly.
    Then I try to write something on all the ts files with these commands:

    ffmpeg -i stream0.ts -vf  "drawbox=y=ih/PHI:color=black@0.4:width=iw:height=48:t=fill,  drawtext=text='Title of this Video':fontcolor=white" -c:v libx264 -c:a copy    ../video6/stream0.ts
    ffmpeg -i stream1.ts -vf  "drawbox=y=ih/PHI:color=black@0.4:width=iw:height=48:t=fill,  drawtext=text='Title of this Video':fontcolor=white" -c:v libx264 -c:a copy    ../video6/stream1.ts
    ffmpeg -i stream2.ts -vf  "drawbox=y=ih/PHI:color=black@0.4:width=iw:height=48:t=fill,  drawtext=text='Title of this Video':fontcolor=white" -c:v libx264 -c:a copy    ../video6/stream2.ts
    ffmpeg -i stream3.ts -vf  "drawbox=y=ih/PHI:color=black@0.4:width=iw:height=48:t=fill,  drawtext=text='Title of this Video':fontcolor=white" -c:v libx264 -c:a copy    ../video6/stream3.ts
    ffmpeg -i stream4.ts -vf  "drawbox=y=ih/PHI:color=black@0.4:width=iw:height=48:t=fill,  drawtext=text='Title of this Video':fontcolor=white" -c:v libx264 -c:a copy    ../video6/stream4.ts
    ffmpeg -i stream5.ts -vf  "drawbox=y=ih/PHI:color=black@0.4:width=iw:height=48:t=fill,  drawtext=text='Title of this Video':fontcolor=white" -c:v libx264 -c:a copy    ../video6/stream5.ts
    ...

    Then I try to play these new ts files with the old m3u8, but I get this error in the console and the HLS player gets stuck:

         Dropping 1 audio frame @ 6.037s due to 6037 ms overlap.
     (anonymous) @ blob:null/d1ca8bf8-cdfc-4b21-bd28-0d0871f7d4bd:1
     e.remuxAudio @ blob:null/d1ca8bf8-cdfc-4b21-bd28-0d0871f7d4bd:1
     e.remux @ blob:null/d1ca8bf8-cdfc-4b21-bd28-0d0871f7d4bd:1
     e.append @ blob:null/d1ca8bf8-cdfc-4b21-bd28-0d0871f7d4bd:1
     e.pushDecrypted @ blob:null/d1ca8bf8-cdfc-4b21-bd28-0d0871f7d4bd:1
     e.push @ blob:null/d1ca8bf8-cdfc-4b21-bd28-0d0871f7d4bd:1
     (anonymous) @ blob:null/d1ca8bf8-cdfc-4b21-bd28-0d0871f7d4bd:1
     blob:null/d1ca8bf8-cdfc-4b21-bd28-0d0871f7d4bd:1 [warn] > Dropping 1 audio frame @ 6.037s due to 6015 ms overlap.
     [... the same warning and stack trace repeat, with the overlap shrinking steadily from 6015 ms down to 5845 ms ...]



       [log] > AVC: 4755 ms overlapping between fragments detected
       blob:null/d1ca8bf8-cdfc-4b21-bd28-0d0871f7d4bd:1 [log] > Video/PTS/DTS adjusted: 4755/4755, delta: -4755 ms
       hls.js@latest:1 [log] > Parsed video,PTS:[4.755,5.881],DTS:[4.755/5.797],nb:141,dropped:0
       hls.js@latest:1 [warn] > negative duration computed for frag 3,level 0, there should be some duration drift between playlist and fragment!
       (anonymous) @ hls.js@latest:1
       a @ hls.js@latest:1
       n @ hls.js@latest:1
       r.onFragParsingData @ hls.js@latest:1
       e.onEventGeneric @ hls.js@latest:1
       e.onEvent @ hls.js@latest:1
       l.emit @ hls.js@latest:1
       t.trigger @ hls.js@latest:1
       t.onWorkerMessage @ hls.js@latest:1
       hls.js@latest:1 [warn] > negative duration computed for frag 1,level 0, there should be some duration drift between playlist and fragment!
       (anonymous) @ hls.js@latest:1
       a @ hls.js@latest:1
       n @ hls.js@latest:1
       r.onFragParsingData @ hls.js@latest:1
       e.onEventGeneric @ hls.js@latest:1
       e.onEvent @ hls.js@latest:1
       l.emit @ hls.js@latest:1
       t.trigger @ hls.js@latest:1
       t.onWorkerMessage @ hls.js@latest:1
       hls.js@latest:1 [log] > main stream:PARSING->PARSED
       hls.js@latest:1 [log] > main buffered : [0.000,6.006]
       hls.js@latest:1 [log] > latency/loading/parsing/append/kbps:20/7/83/5/189565
       hls.js@latest:1 [log] > main stream:PARSED->IDLE
       hls.js@latest:1 [log] > Loading 5 of [0 ,5],level 0, currentTime:0.347,bufferEnd:6.006
       hls.js@latest:1 [log] > main stream:IDLE->FRAG_LOADING
       hls.js@latest:1 [log] > Loaded 5 of [0 ,5],level 0
       hls.js@latest:1 [log] > Parsing 5 of [0 ,5],level 0, cc 0
       hls.js@latest:1 [log] > main stream:FRAG_LOADING->PARSING
       blob:null/d1ca8bf8-cdfc-4b21-bd28-0d0871f7d4bd:1 [warn] > Dropping 1 audio frame @ 6.037s due to

    After that, I analyse the new and old files with https://github.com/epiclabs-io/hls-analyzer.
    For the correct, old version of my process I get:

    ***** Analyzing variant (0) *****

    ** Generic information **
    Version: 3
    Start Media sequence: 0
    Is Live: False
    Encrypted: False
    Number of segments: 6
    Playlist duration: 30.073711

    ** Downloading http://localhost:8182/streams/stream0.ts, Range: None **
    ** Tracks and Media formats **
    Track #0 - Type: video/avc, Format: Video (H.264) - Profile: High, Level: 0, Resolution: 1920x1080, Encoded aspect ratio: 1/1, Display aspect ratio: 16/9
    Track #1 - Type: audio/mp4a-latm, Format: Audio (AAC) - Sample Rate: 48000, Channels: 6

    ** Timing information **
    Segment declared duration: 6.089822
    Track #0 - Duration: 1.126123 s, First PTS: 6.321588 s, Last PTS: 7.447711 s
    Track #1 - Duration: 1.216332 s, First PTS: 6.228255 s, Last PTS: 7.444587 s
    Duration difference (declared vs real): 4.963699s (440.78%)

    ** Frames **
    Track #0 - Frames:  I P B B B P B B B P B B B P P P P P P P P P P P P P B B     AA: 0, BB: 0

       Good! Track starts with a keyframe
       Keyframes count: 1

    Track #1 - Frames:  I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I

    and this for the new ts files:

    ***** Analyzing variant (0) *****

    ** Generic information **
    Version: 3
    Start Media sequence: 0
    Is Live: False
    Encrypted: False
    Number of segments: 6
    Playlist duration: 30.073711

    ** Downloading http://localhost:8182/streams/stream0.ts, Range: None **
    ** Tracks and Media formats **
    Track #0 - Type: video/avc, Format: Video (H.264) - Profile: 0, Level: 0, Resolution: 0x0, Encoded aspect ratio: 1/1, Display aspect ratio: 1
    Track #1 - Type: audio/mp4a-latm, Format: Audio (AAC) - Sample Rate: 48000, Channels: 6

    ** Timing information **
    Segment declared duration: 6.089822
    Track #0 - Duration: 0.0 s, First PTS: 7.406011 s, Last PTS: 7.406011 s
    Track #1 - Duration: 0.063999 s, First PTS: 7.392922 s, Last PTS: 7.456921 s
    Duration difference (declared vs real): 6.025823s (9415.50%)

    ** Frames **
    Track #0 - Frames:      AA: 0, BB: 0

       Keyframes count: 0
       Warning: there are no keyframes in this track! This will cause a bad playback experience

    Track #1 - Frames:  I I I I I I

    What should I do?
    My final target is to write something on the ts files and see the correct effect of that in HLS playback.
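
    For reference, a sketch of an approach that avoids editing segments in place: apply the overlay to the original source and let ffmpeg's hls muxer cut fresh segments and a fresh playlist, forcing a keyframe at each segment boundary. Here input.mp4 and the 6-second segment length are assumptions; the analyzer output above shows the per-segment re-encode left Track #0 with no keyframes and zero duration, which is what breaks playback against the old m3u8.

    # Re-run the overlay on the full input and regenerate both the .ts
    # segments and the .m3u8 so PTS/DTS and keyframe placement stay consistent.
    ffmpeg -i input.mp4 \
      -vf "drawbox=y=ih/PHI:color=black@0.4:width=iw:height=48:t=fill, drawtext=text='Title of this Video':fontcolor=white" \
      -c:v libx264 -force_key_frames "expr:gte(t,n_forced*6)" \
      -c:a copy \
      -f hls -hls_time 6 -hls_list_size 0 ../video6/stream.m3u8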