
Media (91)

Other articles (89)

  • What is a form mask?

    13 June 2013, by

    A form mask is the customisation of the form used to publish media, sections, news items, editorials and links to other sites.
    Each object’s publication form can therefore be customised.
    To access the customisation of form fields, go to the administration area of your MediaSPIP site and select "Configuration des masques de formulaires".
    Then select the form to modify by clicking on its object type. (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (8756)

  • Issues with encoded video in Chrome playback

    4 November 2022, by Michael Joseph Aubry

    Chrome is very particular about how a video is encoded. An issue I am facing with a specific type of video is that the seeked event either does not fire at all, or does not fire within a 20s window, for certain frames.

    I can consistently reproduce this issue using this muxer: https://github.com/redbrain/ytdl-core-muxer/blob/main/index.js

    When I paint each frame in Puppeteer on Lambda, there are certain frames where either seeked never fires or the 20s timeout triggers before the frame can resolve.

    Here is an example of a log sequence for this particular frame that fails (screenshot omitted).

    When I re-encode this video, simply by running ffmpeg -i VIDEO_URL, and then try with the new file, there are no issues.
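
    For reference, a complete re-encode of the kind described might look like the following sketch (the file names are illustrative, and libx264/aac are assumptions, picked as typical Chrome-friendly defaults rather than anything stated in the post):

    # re-encode video and audio and move the moov atom to the front for web playback
    ffmpeg -i badvideo.mp4 -c:v libx264 -c:a aac -movflags +faststart reencoded.mp4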

    In order to see the differences at the frame level, I run ffprobe -show_frames goodvideo.mp4 > good.txt && ffprobe -show_frames badvideo.mp4 > bad.txt. Here is what I see for the frames around the 117s mark, where the first sign of corruption occurs:

    Good Frame @ 117s

    [FRAME]
media_type=video
stream_index=0
key_frame=0
pts=7020013
pts_time=117.000217
pkt_dts=7020013
pkt_dts_time=117.000217
best_effort_timestamp=7020013
best_effort_timestamp_time=117.000217
pkt_duration=1001
pkt_duration_time=0.016683
pkt_pos=27029408
pkt_size=1741
width=1920
height=1080
pix_fmt=yuv420p
sample_aspect_ratio=1:1
pict_type=B
coded_picture_number=7014
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=tv
color_space=bt709
color_primaries=bt709
color_transfer=bt709
chroma_location=left
[/FRAME]

    Bad Frame @ 117s

    [FRAME]
media_type=video
stream_index=0
key_frame=0
pts=117000
pts_time=117.000000
pkt_dts=117000
pkt_dts_time=117.000000
best_effort_timestamp=117000
best_effort_timestamp_time=117.000000
pkt_duration=16
pkt_duration_time=0.016000
pkt_pos=20592998
pkt_size=18067
width=1920
height=1080
pix_fmt=yuv420p
sample_aspect_ratio=1:1
pict_type=P
coded_picture_number=7011
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=tv
color_space=bt709
color_primaries=bt709
color_transfer=bt709
chroma_location=left
[/FRAME]
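
    One difference that stands out in the two dumps (an observation on the numbers shown above, not something stated in the post): the good frame’s values imply a fine-grained time base of 1/60000 (pkt_duration 1001, and 1001/60000 = 0.016683), while the bad frame’s imply a 1/1000 time base (pkt_duration 16, and 16/1000 = 0.016000), which is the millisecond granularity typical of Matroska output. A quick way to compare this at the container level, using the same file names as above:

    # inspect the video stream's time base and frame rate in both files
    ffprobe -v error -select_streams v:0 -show_entries stream=time_base,avg_frame_rate -of default=noprint_wrappers=1 goodvideo.mp4
    ffprobe -v error -select_streams v:0 -show_entries stream=time_base,avg_frame_rate -of default=noprint_wrappers=1 badvideo.mp4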

    Does anyone know what the differences are, and why the bad frame causes my rendering function to have issues seeking?

    This is how I am muxing the bad video. I’m trying to avoid re-encoding, but re-encoding seems to fix the issue. Are there any settings I can apply to avoid re-encoding while making the video more robust in Chrome?

// imports assumed for a self-contained example; the original snippet omits them
// (ytdl-core and @ffmpeg-installer/ffmpeg are guesses based on the identifiers used)
import * as cp from "child_process";
import * as stream from "stream";
import { Readable } from "stream";
import ytdl, { videoInfo } from "ytdl-core";
import * as ffmpegPath from "@ffmpeg-installer/ffmpeg";

// default export: the ffmpeg muxer
const ytmux = (link, options: any = {}) => {
  const result = new stream.PassThrough({
    highWaterMark: options.highWaterMark || 1024 * 512
  });

  ytdl.getInfo(link, options).then((info: videoInfo) => {
    let audioStream: Readable = ytdl.downloadFromInfo(info, {
      ...options,
      quality: "highestaudio"
    });
    let videoStream: Readable = ytdl.downloadFromInfo(info, {
      ...options,
      quality: "highestvideo"
    });
    // create the ffmpeg process for muxing
    let ffmpegProcess: any = cp.spawn(
      ffmpegPath.path,
      [
        // suppress non-crucial messages
        "-loglevel",
        "8",
        "-hide_banner",
        // input audio and video by pipe
        "-i",
        "pipe:3",

        "-i",
        "pipe:4",

        // map audio and video correspondingly
        "-map",
        "0:v",
        "-map",
        "1:a",

        // no need to change the codec
        "-c",
        "copy",

        // Allow output to be seekable, which is needed for mp4 output
        "-movflags",
        "frag_keyframe+empty_moov",

        "-fflags",
        "+genpts",

        "-f",
        "matroska",

        "pipe:5"
      ],
      {
        // no popup window for Windows users
        windowsHide: true,
        stdio: [
          // stdin, stdout and stderr are inherited from the parent process,
          "inherit",
          "inherit",
          "inherit",
          // and pipe audio, video, output
          "pipe",
          "pipe",
          "pipe"
        ]
      }
    );

    audioStream.pipe(ffmpegProcess.stdio[4]);
    videoStream.pipe(ffmpegProcess.stdio[3]);
    ffmpegProcess.stdio[5].pipe(result);
  });
  return result;
};
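
    Note that -movflags is an option of the mov/mp4 muxer, so with -f matroska the frag_keyframe+empty_moov flags above have no effect. One thing that could be tried without re-encoding is a container-only rewrite. The sketch below is an assumption rather than a verified fix: it only changes the container and timestamp granularity, and it requires the copied codecs to be MP4-compatible (e.g. H.264/AAC).

    # remux only (no re-encode) into MP4, with faststart and a finer video timescale
    ffmpeg -i badvideo.mp4 -c copy -movflags +faststart -video_track_timescale 60000 remuxed.mp4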

  • How to adjust MPEG-2 TS start time with ffmpeg?

    24 July 2019, by Maxim Kornienko

    I’m writing a simple HLS (HTTP Live Streaming) Java server to live-cast (really live, not on demand) a screen show + voice. I constantly receive chunks of image frames and audio samples as input to my service and produce MPEG-2 TS files + an m3u8 playlist web page as output. The workflow is the following:

    1. Collect (buffer) source video frames and audio for a certain period of time
    2. Convert the series of video frames to an H.264-encoded video file
    3. Convert the audio samples to an MP3 audio file
    4. Merge them into a .ts file with the ffmpeg command

      ffmpeg -i audio.mp3 -i video.mp4 -f mpegts -c:a copy -c:v copy -vprofile main -level:v 4.0 -vbsf h264_mp4toannexb -flags -global_header segment.ts
    5. Publish several .ts files in the m3u8 playlist.

    The problem is that the resulting playlist stops playing after the first segment. VLC logs the following errors:

    freetype error: Breaking unbreakable line
    ts error: libdvbpsi (PSI decoder): TS discontinuity (received 0, expected 4) for PID 17
    ts error: libdvbpsi (PSI decoder): TS duplicate (received 0, expected 1) for PID 0
    ts error: libdvbpsi (PSI decoder): TS duplicate (received 0, expected 1) for PID 4096
    core error: ES_OUT_SET_(GROUP_)PCR is called too late (pts_delay increased to 1000 ms)
    core error: ES_OUT_RESET_PCR called
    core error: Could not convert timestamp 185529572000
    ts error: libdvbpsi (PSI decoder): TS discontinuity (received 0, expected 4) for PID 17
    ts error: libdvbpsi (PSI decoder): TS duplicate (received 0, expected 1) for PID 0
    ts error: libdvbpsi (PSI decoder): TS duplicate (received 0, expected 1) for PID 4096
    core error: ES_OUT_SET_(GROUP_)PCR is called too late (jitter of 8653 ms ignored)
    core error: Could not get display date for timestamp 0
    core error: Could not convert timestamp 185538017000
    core error: Could not convert timestamp 185538267000
    core error: Could not convert timestamp 185539295977
    ...

    I guess the reason is that the start times of the segments do not belong to one continuous stream, but it’s impossible to concatenate and re-segment (with ffmpeg -f segment) the whole stream every time a new chunk is added. I tried adding the #EXT-X-DISCONTINUITY tag to the playlist as suggested here, but it didn’t help. When I ffprobe the segments I get:

    Input #0, mpegts, from '26.ts':
    Duration: 00:00:10.02, start: 1.876978, bitrate: 105 kb/s
    Program 1
    Metadata:
     service_name    : Service01
     service_provider: FFmpeg
    Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 640x640, 4 fps, 4 tbr, 90k tbn, 8 tbc
    Stream #0:1[0x101]: Audio: mp3 ([3][0][0][0] / 0x0003), 48000 Hz, mono, s16p, 64 kb/s  

    The start value in the line Duration: 00:00:10.02, start: 1.876978, bitrate: 105 kb/s is more or less the same for all segments.
    When I check segments from proven-to-work playlists (like http://vevoplaylist-live.hls.adaptive.level3.net/vevo/ch1/appleman.m3u8), they all have different start values for each segment, for example:

    Input #0, mpegts, from 'segm150518140104572-424570.ts':
    Duration: 00:00:06.17, start: 65884.808689, bitrate: 479 kb/s
    Program 257
    Stream #0:0[0x20]: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 320x180 [SAR 1:1 DAR 16:9], 30 fps, 29.97 tbr, 90k tbn, 60 tbc
    Stream #0:1[0x21]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 115 kb/s
    Stream #0:2[0x22]: Data: timed_id3 (ID3  / 0x20334449)

    and the next segment after it:

    Input #0, mpegts, from 'segm150518140104572-424571.ts':
    Duration: 00:00:06.22, start: 65890.814689, bitrate: 468 kb/s
    Program 257
    Stream #0:0[0x20]: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 320x180 [SAR 1:1 DAR 16:9], 30 fps, 29.97 tbr, 90k tbn, 60 tbc
    Stream #0:1[0x21]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 124 kb/s
    Stream #0:2[0x22]: Data: timed_id3 (ID3  / 0x20334449)

    They differ in that the start time of segm150518140104572-424571.ts is equal to the start time plus the duration of segm150518140104572-424570.ts.

    How could this start value be adjusted with ffmpeg? Or is my whole approach wrong? Unfortunately, I couldn’t find a working example on the internet of a live (not on-demand) video service implemented with ffmpeg.
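
    One option that may be worth trying (a sketch only; the offset variable and segment name are assumptions, and whether it removes the VLC errors has not been verified here) is the generic output option -output_ts_offset, which shifts all output timestamps by a fixed amount. Applied to step 4 above, each new segment would be muxed with an offset equal to the total duration of the segments already published:

    # OFFSET = sum of the durations of the previously published segments (hypothetical shell variable)
    ffmpeg -i audio.mp3 -i video.mp4 -c:a copy -c:v copy -vbsf h264_mp4toannexb -flags -global_header -output_ts_offset "$OFFSET" -f mpegts segmentN.ts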
