Advanced search

Media (0)

Keyword: - Tags: /signalement

No media matching your criteria is available on the site.

Other articles (19)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation from users as well as developers, including:
    • critique of existing features and functions
    • articles contributed by developers, administrators, content producers and editors
    • screenshots to illustrate the above
    • translations of existing documentation into other languages
    To contribute, register to the project users’ mailing (...)

  • Final creation of the channel

    12 March 2010

    Once your request has been approved, you can proceed with the actual creation of the channel. Each channel is a fully fledged site placed under your responsibility. The platform administrators have no access to it.
    Upon approval, you receive an email inviting you to create your channel.
    To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
    At that point you are asked for a password; you simply have to (...)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared hosting on a regular basis. Combined with a system Cron on the central site of the shared hosting, this makes it possible to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)
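
    For illustration, the mechanism described above boils down to periodically hitting each instance so that SPIP runs its pending tasks; a minimal sketch (with hypothetical instance URLs, not taken from the article) could look like:

import urllib.request

# Hypothetical list of instances in the farm; a plain HTTP hit is enough
# for each SPIP site to trigger its pending scheduled (Cron) tasks.
INSTANCES = ["http://site1.mediaspip.example", "http://site2.mediaspip.example"]

for site in INSTANCES:
    try:
        urllib.request.urlopen(site, timeout=10).read()
    except OSError:
        pass  # an unreachable instance should not stop the round

    The real farm presumably drives this from a system crontab entry on the central site rather than a standalone script.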

On other sites (2889)

  • FFMPEG: TS video copied to MP4, missing 3 first frames [closed]

    21 August 2023, by Vasilis Lemonidis

    Running:

    


    ffmpeg -i test.ts -fflags +genpts -c copy -y test.mp4


    


    for this test.ts, which has 30 frames readable by OpenCV, I end up with 28 frames, of which 27 are readable by OpenCV. More specifically:

    


    ffprobe -v error -select_streams v:0 -count_packets  -show_entries stream=nb_read_packets -of csv=p=0 tmp.ts 


    


    returns 30.

    


    ffprobe -v error -select_streams v:0 -count_packets     -show_entries stream=nb_read_packets -of csv=p=0 tmp.mp4


    


    returns 28.

    


    Using OpenCV in this manner:

    


import cv2

cap = cv2.VideoCapture(tmp_path)
readMat = []
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    readMat.append(frame)
cap.release()


    


    For the ts file I get 30 frames, while for the mp4 I get 27.
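
    One way to cross-check this (an illustrative aside, assuming ffprobe is on the PATH) is to count decoded frames rather than demuxed packets, since -count_packets only reports what the demuxer reads:

import subprocess

def count_decoded_frames(path: str) -> int:
    # -count_frames decodes the stream and reports nb_read_frames,
    # i.e. the number of frames a decoder can actually reconstruct.
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-count_frames", "-show_entries", "stream=nb_read_frames",
         "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip())

print(count_decoded_frames("tmp.ts"), count_decoded_frames("tmp.mp4"))

    Comparing both counters for both files shows whether frames go missing at the container level or only at decode time.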

    


    Could someone explain these discrepancies? I get no errors during the conversion from ts to mp4:

    


    ffmpeg version N-111746-gd53acf452f Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 11.3.0 (GCC)
  configuration: --ld=g++ --bindir=/bin --extra-libs='-lpthread -lm' --pkg-config-flags=--static --enable-static --enable-gpl --enable-libaom --enable-libass --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libsvtav1 --enable-libdav1d --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --enable-cuda-nvcc --enable-cuvid --enable-nvenc --enable-libnpp 
  libavutil      58. 16.101 / 58. 16.101
  libavcodec     60. 23.100 / 60. 23.100
  libavformat    60. 10.100 / 60. 10.100
  libavdevice    60.  2.101 / 60.  2.101
  libavfilter     9. 10.100 /  9. 10.100
  libswscale      7.  3.100 /  7.  3.100
  libswresample   4. 11.100 /  4. 11.100
  libpostproc    57.  2.100 / 57.  2.100
[mpegts @ 0x4237240] DTS discontinuity in stream 0: packet 5 with DTS 306003, packet 6 with DTS 396001
Input #0, mpegts, from 'tmp.ts':
  Duration: 00:00:21.33, start: 3.400000, bitrate: 15 kb/s
  Program 1 
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
  Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 300x300, 1 fps, 3 tbr, 90k tbn
Output #0, mp4, to 'test.mp4':
  Metadata:
    encoder         : Lavf60.10.100
  Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 300x300, q=2-31, 1 fps, 3 tbr, 90k tbn
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[out#0/mp4 @ 0x423e280] video:25kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 4.192123%
frame=   30 fps=0.0 q=-1.0 Lsize=      26kB time=00:00:21.00 bitrate=  10.3kbits/s speed=1e+04x 


    


    Additional information

    


    The video I am processing originates from a continuous operation that stitches still images into ts videos, produced by the update method of this class:

    


import cv2
import logging
import os
import shutil
import subprocess
from tempfile import NamedTemporaryFile
from typing import Optional

import numpy as np

LOGGER = logging.getLogger(__name__)

class VideoUpdater:
    def __init__(
        self, video_path: str, framerate: int, timePerFrame: Optional[int] = None
    ):
        """
        Video updater takes in a video path, and updates it using a supplied frame, based on a given framerate.
        Args:
            video_path: str: Specify the path to the video file
            framerate: int: Set the frame rate of the video
        """
        if not video_path.endswith(".mp4"):
            LOGGER.warning(
                f"File type {os.path.splitext(video_path)[1]} not supported for streaming, switching to ts"
            )
            video_path = os.path.splitext(video_path)[0] + ".mp4"

        self._ps = None
        self.env = {}
        self.ffmpeg = "/usr/bin/ffmpeg "

        self.video_path = video_path
        self.ts_path = video_path.replace(".mp4", ".ts")
        self.tfile = None
        self.framerate = framerate
        self._video = None
        self.last_frame = None
        self.curr_frame = None


    def update(self, frame: np.ndarray):
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        else:
            frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        self.writeFrame(frame)

    def writeFrame(self, frame: np.ndarray):
        """
        The writeFrame function takes a frame and writes it to the video file.
        Args:
            frame: np.ndarray: Write the frame to a temporary file
        """


        tImLFrame = NamedTemporaryFile(suffix=".png")
        tVidLFrame = NamedTemporaryFile(suffix=".ts")

        cv2.imwrite(tImLFrame.name, frame)
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-loop 1 -r {self.framerate} -i {tImLFrame.name} -t {self.framerate} -vcodec libx264 -pix_fmt yuv420p -y {tVidLFrame.name}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        ps.communicate()
        if os.path.isfile(self.ts_path):
            # this does not work to watch, as timestamps are not updated
            ps = subprocess.Popen(
                self.ffmpeg
                + rf'-i "concat:{self.ts_path}|{tVidLFrame.name}" -c copy -y {self.ts_path.replace(".ts", ".bak.ts")}',
                env=self.env,
                shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
            ps.communicate()
            shutil.move(self.ts_path.replace(".ts", ".bak.ts"), self.ts_path)

        else:
            shutil.copyfile(tVidLFrame.name, self.ts_path)
        # fix timestamps; we don't have to wait for this operation
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-i {self.ts_path} -fflags +genpts -c copy -y {self.video_path}",
            env=self.env,
            shell=True,
            # stdout=subprocess.PIPE,
            # stderr=subprocess.PIPE,
        )
        tImLFrame.close()
        tVidLFrame.close()
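
    For context, a minimal driver for the class above might look like the following (hypothetical synthetic frames, not part of the original post); with framerate=1 each call appends roughly one frame, so 30 calls correspond to the 30-frame clip discussed above:

import numpy as np

updater = VideoUpdater("out.mp4", framerate=1)
for _ in range(30):
    # a synthetic 300x300 grayscale frame; update() converts it to BGR
    frame = (np.random.rand(300, 300) * 255).astype(np.uint8)
    updater.update(frame)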


    



  • How to write HEVC frames to file using FFMpeg?

    2 May 2018, by boneill

    I have the following code, used to output H264 frames to an mp4 file, and this works fine:

    typedef struct _AvFileStreamContext
    {
       AVStream                 *pStreams[AV_FRAME_MAX];
       AVCodecContext           *streamCodec;
       AVFormatContext          *pAvContext;    // initialised with avformat_alloc_output_context2 for mp4 container
       struct timeval           firstFrameTime; // time of the first frame, used below as the PTS origin

    }_AvFileStreamContext;

    static void WriteImageFrameToFile(const unsigned char * frame,
                                     const int frameSize,
                                     const struct timeval *frameTime,
                                     AvFileStreamContext *pContext,
                                     int keyFrame)
    {
       AVStream *stream = pContext->pStreams[AV_FRAME_VIDEO];
       AVPacket pkt;

       av_init_packet(&pkt);
       if (keyFrame)
       {
           pkt.flags |= AV_PKT_FLAG_KEY;
       }
       pkt.stream_index = stream->index;
       pkt.data = (unsigned char*)frame;
       pkt.size = frameSize;

       int ptsValue = round((float)( ( (frameTime->tv_sec - pContext->firstFrameTime.tv_sec ) * 1000000 +
                                       (frameTime->tv_usec - pContext->firstFrameTime.tv_usec)) * pContext->streamCodec->time_base.den ) / 1000000);

       // Packets PTS/DTS must be in Stream time-base units before writing so
       // rescaling between coder and stream time bases is required.
       pkt.pts =  av_rescale_q(ptsValue, pContext->streamCodec->time_base, stream->time_base);
       pkt.dts =  av_rescale_q(ptsValue, pContext->streamCodec->time_base, stream->time_base);

       av_interleaved_write_frame(pContext->pAvContext, &pkt);
    }
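
    To make the timestamp arithmetic concrete (a worked example, not from the original post): with streamCodec->time_base = 1/25 and stream->time_base = 1/90000, a frame arriving 2 s after the first gives ptsValue = round(2000000 * 25 / 1000000) = 50 coder ticks, and av_rescale_q(50, (AVRational){1,25}, (AVRational){1,90000}) = 50 * 90000 / 25 = 180000 stream ticks.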

    Once all frames have been received, I call the following:

    av_write_trailer(pContext->pAvContext);
    avio_close(pContext->pAvContext->pb);

    The above function is fed from a circular buffer of frames, where each entry in the buffer represents one H264 frame. I am trying to understand how I can adapt this to handle H265/HEVC frames. When I blindly try to use it for H265 frames, I end up with an mp4 file where each frame only contains a third of a complete frame, i.e.:

    [image: h265partialframe]
    The video continues to play for the correct duration, but each frame is only a third of a complete frame. The implementation for receiving frames is the same as for H264, and it is my understanding that with H265 each of the buffers I am receiving represents a 'tile'. In my case these are tile columns, of which 3 make up one frame. It would therefore seem that the above function needs to be adapted to combine 3 tiles until an end-of-frame marker is received. I have trawled the FFMpeg v3.3 documentation to find out how I can achieve this, but have had limited luck.
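
    As a quick sanity check (an illustrative aside in Python, assuming the received buffers are Annex B byte strings), the slice headers themselves indicate where a picture starts: for a VCL NAL unit (type 0-31) the first bit after the two-byte HEVC NAL header is first_slice_segment_in_pic_flag:

    def scan_hevc_nals(buf: bytes):
        # Yield (nal_unit_type, first_slice_segment_in_pic_flag) for each
        # NAL unit found after a 00 00 01 start code in an Annex B buffer.
        i = 0
        while i + 5 < len(buf):
            if buf[i] == 0 and buf[i + 1] == 0 and buf[i + 2] == 1:
                hdr = i + 3
                nal_type = (buf[hdr] >> 1) & 0x3F    # 6-bit nal_unit_type
                first_slice = None
                if nal_type <= 31:                   # VCL NAL unit (a slice segment)
                    first_slice = buf[hdr + 2] >> 7  # first bit of the slice header
                yield nal_type, first_slice
                i = hdr
            else:
                i += 1

    A buffer whose first VCL NAL unit reports first_slice = 1 starts a new picture; subsequent tiles of the same picture report 0.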
    I have tried to use the following function call to combine frames:

    uint8_t *outputBuffer;
    int bufferSize;  // av_parser_parse2 takes an int *, not unsigned int *
    stream->parser = av_parser_init(AV_CODEC_ID_HEVC);
    int rc = av_parser_parse2(stream->parser, pContext->streamCodec,  // the parser needs the AVCodecContext
                             &outputBuffer, &bufferSize,
                             frame, frameSize,
                             pkt.pts, pkt.dts, -1);

    It seems that the above call will ultimately call:

    static int hevc_parse(AVCodecParserContext *s,
                         AVCodecContext *avctx,
                         const uint8_t **poutbuf, int *poutbuf_size,
                         const uint8_t *buf, int buf_size)

    Followed by:

    int ff_combine_frame(ParseContext *pc, int next,
                        const uint8_t **buf, int *buf_size)

    So it seems this is the correct path. However, when I plug this all in, the resulting mp4 file is not playable under GStreamer, which gives the following errors:

    Prerolling...
    (gst-play-1.0:11225): GStreamer-WARNING **: gstpad.c:4943:store_sticky_event: Sticky event misordering, got 'segment' before 'caps'
    Redistribute latency...

    (gst-play-1.0:11225): GStreamer-WARNING **: gstpad.c:4943:store_sticky_event: Sticky event misordering, got 'segment' before 'caps'

    And I get the following errors (a snippet) from VLC, where the frames appear to be correct in terms of height and width but are corrupt or incomplete during playback:

    [hevc @ 0x7f8128c30440] PPS changed between slices.
    [hevc @ 0x7f8128c42080] PPS changed between slices.
    [hevc @ 0x7f8128c53ce0] PPS changed between slices.
    [00007f8110293be8] freetype spu text error: Breaking unbreakable line
    [hevc @ 0x7f8128c1e0a0] First slice in a frame missing.
    [hevc @ 0x7f8128c1e0a0] First slice in a frame missing.
    [hevc @ 0x7f8128c1e0a0] Could not find ref with POC 7
    [hevc @ 0x7f8128c30440] PPS changed between slices.
    [hevc @ 0x7f8128c42080] PPS changed between slices.
    [hevc @ 0x7f8128c53ce0] PPS changed between slices.
    [hevc @ 0x7f8128c1e0a0] First slice in a frame missing.
    [hevc @ 0x7f8128c1e0a0] First slice in a frame missing.
    [hevc @ 0x7f8128c1e0a0] Could not find ref with POC 15
    [hevc @ 0x7f8128c30440] PPS changed between slices.
    [hevc @ 0x7f8128c42080] PPS changed between slices.
    [hevc @ 0x7f8128c53ce0] PPS changed between slices.
    [hevc @ 0x7f8128c1e0a0] First slice in a frame missing.
    [hevc @ 0x7f8128c1e0a0] First slice in a frame missing.
    [hevc @ 0x7f8128c1e0a0] Could not find ref with POC 23

    Here is an example frame from the VLC playback, where you can just about see the outline of a person:
    [image: h265combinedtiles]

    It should be noted that when using av_parser_parse2(), a frame is only passed to av_interleaved_write_frame() once outputBuffer is populated, which seems to take a number of tiles (more than 3), so I am possibly not setting something correctly.

    • Can I tell FFMpeg that a particular tile is the end of a frame?
    • Should I be using some other FFMpeg call to combine H265 tiles?
    • Am I misunderstanding how HEVC operates? (probably)
    • Once combined, is the call to av_interleaved_write_frame() still valid?
    • Note that use of libx265 is not possible.

    Any help appreciated.