Advanced search

Media (1)

Keyword: - Tags -/MediaSPIP 0.2

Other articles (10)

  • Other interesting software

    12 April 2011, by

    We don’t claim to be the only ones doing what we do ... and we certainly don’t claim to be the best either ... We just try to do what we do well, and better and better...
    The following list corresponds to software that more or less tends to do what MediaSPIP does, or that MediaSPIP more or less tries to emulate, in no particular order ...
    We don’t know them and we haven’t tried them, but you may want to take a look at them.
    Videopress
    Website: (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • A selection of projects using MediaSPIP

    29 April 2011, by

    The examples listed below are representative of specific uses of MediaSPIP for certain projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
    MediaSPIP farm @ Infini
    The Infini association develops activities around public reception, internet access points, training, and leading innovative projects in the field of Information and Communication Technologies, as well as website hosting. It plays a unique role in this area (...)

On other sites (5431)

  • FFMPEG Audio/video out of sync after cutting and concatenating even after transcoding

    4 May 2020, by Ham789

    I am attempting to take cuts from a set of videos and concatenate them together with the concat demuxer.

    However, the audio is out of sync with the video in the output. The audio seems to drift further out of sync as the video progresses. Interestingly, if I seek to another time in the video with the player's progress bar, the audio becomes synced up with the video but then gradually drifts out of sync again. Seeking to a new time in the player seems to reset the audio/video. It is as if they are being played back at different rates. I get this behaviour in both the QuickTime and VLC players.

    For each video, I decode it, trim a clip from it, and then encode it to 4K resolution at 25 fps along with its audio:

    ffmpeg -ss 0.5 -t 0.5 -i input_video1.mp4 -r 25 -vf scale=3840:2160 output_video1.mp4

    I then take each of these videos and concatenate them together with the concat demuxer:

    ffmpeg -f concat -safe 0 -i cut_videos.txt -c copy -y output.mp4
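    For context, the cut_videos.txt passed to the concat demuxer is a plain list file with one `file 'path'` line per clip. A minimal sketch of generating it from Python (the clip names below are placeholders, not the poster's actual files) would be:

    ```python
    # Build a concat-demuxer list file for ffmpeg.
    # The clip names are hypothetical placeholders.
    clips = ["output_video1.mp4", "output_video2.mp4", "output_video3.mp4"]

    with open("cut_videos.txt", "w") as f:
        for clip in clips:
            # Each line has the form: file 'path'
            # (single quotes protect paths containing spaces)
            f.write(f"file '{clip}'\n")
    ```

    The resulting file is then passed via `-f concat -safe 0 -i cut_videos.txt` as in the command above.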

    I am taking short cuts of each video (approximately 0.5 s each).

    I am using Python's subprocess module to automate the cutting and concatenating of the videos.
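    The automation step might be sketched roughly as follows; the paths and timings are assumptions, not the poster's actual script. Passing the command as an argument list (rather than one shell string) also sidesteps shell quoting issues:

    ```python
    import subprocess

    def trim_cmd(src, dst, start, duration):
        # Build the same trim/re-encode command shown above as an argument list.
        return [
            "ffmpeg", "-ss", str(start), "-t", str(duration),
            "-i", src,
            "-r", "25", "-vf", "scale=3840:2160",
            "-y", dst,
        ]

    cmd = trim_cmd("input_video1.mp4", "output_video1.mp4", 0.5, 0.5)
    # subprocess.run(cmd, check=True)  # uncomment to actually invoke ffmpeg
    ```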

    I am not sure whether this happens during the trimming or the concatenation step, but when I play back the intermediate cut video files (output_video1.mp4 in the above command), there seems to be some silence before the audio comes in at the start of the video.

    When I concatenate the videos, I sometimes get a lot of the following warnings; however, the audio still goes out of sync even when I do not get them:

    [mp4 @ 0000021a252ce080] Non-monotonous DTS in output stream 0:1; previous: 51792, current: 50009; changing to 51793. This may result in incorrect timestamps in the output file.

    From this post, it seems to be a problem with cutting the videos and their timestamps. The solution proposed in the post is to decode, cut, and then re-encode the video; however, I am already doing that.

    How can I ensure the audio and video are in sync? Am I transcoding incorrectly? This seems to be the only solution I can find online, yet it does not seem to work.

    UPDATE:

    I took inspiration from this post and separated the video and audio from output_video1.mp4 using:

    ffmpeg -i output_video1.mp4 -vcodec copy -an video.mp4

    and

    ffmpeg -i output_video1.mp4 -acodec copy -vn audio.mp4

    I then compared the durations of video.mp4 and audio.mp4 and got 0.57 s and 0.52 s respectively. Since the video is longer, this explains why there is a period of silence in the videos. The post then suggests that transcoding is the solution; however, as you can see from the commands above, that does not work for me.
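    A quick way to compare the two stream durations without writing intermediate files is to query them with ffprobe. A sketch wrapping it from Python (assuming ffprobe is on the PATH; the file name in the example is hypothetical):

    ```python
    import subprocess

    def ffprobe_cmd(path, stream):
        # stream is "v:0" for the first video stream, "a:0" for the first audio stream
        return [
            "ffprobe", "-v", "error",
            "-select_streams", stream,
            "-show_entries", "stream=duration",
            "-of", "default=noprint_wrappers=1:nokey=1",
            path,
        ]

    def stream_duration(path, stream):
        # Run ffprobe and parse its single-line duration output as seconds.
        out = subprocess.run(ffprobe_cmd(path, stream),
                             capture_output=True, text=True, check=True).stdout
        return float(out.strip())

    # Example (hypothetical file):
    # print(stream_duration("output_video1.mp4", "v:0") -
    #       stream_duration("output_video1.mp4", "a:0"))
    ```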

    Sample Output Log for the Trim Command

      built with Apple LLVM version 10.0.0 (clang-1000.11.45.5)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input_video1.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
  Duration: 00:00:04.06, start: 0.000000, bitrate: 14266 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 3840x2160, 14268 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
    Metadata:
      handler_name    : Core Media Video
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 94 kb/s (default)
    Metadata:
      handler_name    : Core Media Audio
File 'output_video1.mp4' already exists. Overwrite ? [y/N] y
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x7fcae4001e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7fcae4001e00] profile High, level 5.1
[libx264 @ 0x7fcae4001e00] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output_video1.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
    Stream #0:0(und): Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 3840x2160, q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
    Metadata:
      handler_name    : Core Media Video
      encoder         : Lavc58.54.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
    Metadata:
      handler_name    : Core Media Audio
      encoder         : Lavc58.54.100 aac
frame=   14 fps=7.0 q=-1.0 Lsize=     928kB time=00:00:00.51 bitrate=14884.2kbits/s dup=0 drop=1 speed=0.255x    
video:922kB audio:5kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.194501%
[libx264 @ 0x7fcae4001e00] frame I:1     Avg QP:21.06  size:228519
[libx264 @ 0x7fcae4001e00] frame P:4     Avg QP:22.03  size: 85228
[libx264 @ 0x7fcae4001e00] frame B:9     Avg QP:22.88  size: 41537
[libx264 @ 0x7fcae4001e00] consecutive B-frames: 14.3%  0.0%  0.0% 85.7%
[libx264 @ 0x7fcae4001e00] mb I  I16..4: 27.6% 64.3%  8.1%
[libx264 @ 0x7fcae4001e00] mb P  I16..4:  9.1% 10.7%  0.2%  P16..4: 48.5%  7.3%  3.9%  0.0%  0.0%    skip:20.2%
[libx264 @ 0x7fcae4001e00] mb B  I16..4:  1.1%  1.0%  0.0%  B16..8: 44.5%  2.9%  0.2%  direct: 8.3%  skip:42.0%  L0:45.6% L1:53.2% BI: 1.2%
[libx264 @ 0x7fcae4001e00] 8x8 transform intra:58.2% inter:93.4%
[libx264 @ 0x7fcae4001e00] coded y,uvDC,uvAC intra: 31.4% 62.2% 5.2% inter: 11.4% 30.9% 0.0%
[libx264 @ 0x7fcae4001e00] i16 v,h,dc,p: 15% 52% 12% 21%
[libx264 @ 0x7fcae4001e00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 33% 32%  2%  2%  2%  4%  2%  4%
[libx264 @ 0x7fcae4001e00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 39%  9%  3%  4%  4% 12%  3%  4%
[libx264 @ 0x7fcae4001e00] i8c dc,h,v,p: 43% 36% 18%  3%
[libx264 @ 0x7fcae4001e00] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x7fcae4001e00] ref P L0: 69.3%  8.0% 14.8%  7.9%
[libx264 @ 0x7fcae4001e00] ref B L0: 88.1%  9.2%  2.6%
[libx264 @ 0x7fcae4001e00] ref B L1: 90.2%  9.8%
[libx264 @ 0x7fcae4001e00] kb/s:13475.29
[aac @ 0x7fcae4012400] Qavg: 125.000

  • FFmpeg -segment with a lot of time codes

    24 February 2020, by Grenight

    I’m making a program that splits videos for running parallel encodes.

    My problem is that by putting all the segments into the command line I can hit the maximum character limit of the Windows console at approximately 600 segments, while a 1-2 hour movie can easily have 1000-2000 scenes.

    Is there a way to use a .csv file for the segments so I don’t need to put all the time codes on the command line? Or is there another way?

    Here is an example of the command line used in the program;
    timecodes is where all the time codes are passed.

    cmd = f'{self.FFMPEG} -i {video} -map_metadata 0 -an -f segment -segment_times {timecodes} -c copy -avoid_negative_ts 1 {self.temp_dir / "split" / "%04d.mkv"}'
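    One possible workaround, sketched below under the assumption that the program already holds the time codes as a Python list: pass the arguments to subprocess as a list instead of a single shell string. This bypasses cmd.exe and its roughly 8k character limit, although very long argument strings can still hit the Windows CreateProcess limit of about 32k characters, in which case splitting the job into batches remains an option. The file names here are placeholders:

    ```python
    import subprocess

    def split_cmd(ffmpeg, video, timecodes, out_pattern):
        # timecodes is a list of floats/strings; -segment_times wants them comma-joined
        times = ",".join(str(t) for t in timecodes)
        return [
            ffmpeg, "-i", video,
            "-map_metadata", "0", "-an",
            "-f", "segment", "-segment_times", times,
            "-c", "copy", "-avoid_negative_ts", "1",
            out_pattern,
        ]

    cmd = split_cmd("ffmpeg", "movie.mkv", [4.2, 9.8, 15.0], "split/%04d.mkv")
    # subprocess.run(cmd, check=True)  # list args skip the shell entirely
    ```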

  • record rtsp stream to file(muxing)

    11 April 2014, by user3521863
    AVFormatContext *g_oc = NULL;
    AVStream *g_in_audio_st, *g_in_video_st;
    AVStream *g_out_audio_st, *g_out_video_st;
    int audio_pts = 0, video_pts = 0, audio_dts = 0, video_dts = 0;
    int last_video_pts = 0;
    AVPacket outpkt, *av_pkt;

    // initialize video codec
    static void init_video_codec(AVFormatContext *context) {
       LOGI(1, "enter init_video_codec");
       AVFormatContext *in_format_ctx = NULL;
       AVCodecContext *avcodec_ctx = NULL;
       int fps = 0;

       if(context->streams[1]->r_frame_rate.num != AV_NOPTS_VALUE &&
               context->streams[1]->r_frame_rate.den != 0)
           fps = context->streams[1]->r_frame_rate.num / context->streams[1]->r_frame_rate.den;
       else
           fps = 25;

       g_out_video_st = avformat_new_stream(g_oc, context->streams[1]->codec->codec);
       LOGI(1, "video avformat_new_stream");
       if( g_out_video_st == NULL ) {
           LOGE(1, "Fail to Allocate Output Video Stream");
           return ;
       }
       else {
           LOGI(1, "Allocated Video Stream");
           if( avcodec_copy_context(g_out_video_st->codec, context->streams[1]->codec) != 0 ) {
               LOGE(1, "Failed to video Copy Context");

               return ;
           }
           else {
               LOGI(1, "Success to video Copy Context");
    // how to set the video stream parameters?

               g_out_video_st->sample_aspect_ratio.den = g_in_video_st->codec->sample_aspect_ratio.den;
               g_out_video_st->sample_aspect_ratio.num = g_in_video_st->codec->sample_aspect_ratio.num;
               g_out_video_st->codec->codec_id         = g_in_video_st->codec->codec->id;
               g_out_video_st->codec->time_base.num    = 1;
               g_out_video_st->codec->time_base.den    = fps * (g_in_video_st->codec->ticks_per_frame);
               g_out_video_st->time_base.num           = 1;
               g_out_video_st->time_base.den           = 1000;
               g_out_video_st->r_frame_rate.num        = fps;
               g_out_video_st->r_frame_rate.den        = 1;
               g_out_video_st->avg_frame_rate.den      = 1;
               g_out_video_st->avg_frame_rate.num      = fps;
               g_out_video_st->codec->width            = g_frame_width;
               g_out_video_st->codec->height           = g_frame_height;
               g_out_video_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
           }
       }

       LOGI(1, "end video init");
    }

    // initialize audio codec
    static void init_audio_codec(AVFormatContext *context) {
       LOGI(1, "enter init_audio_codec");
       AVFormatContext *in_format_ctx = NULL;
       AVCodecContext *avcodec_ctx = NULL;

       g_out_audio_st = avformat_new_stream(g_oc, context->streams[0]->codec->codec);
       LOGI(1, "audio avformat_new_stream");
       if( avcodec_copy_context(g_out_audio_st->codec, context->streams[0]->codec) != 0 ) {
           LOGE(1, "Failed to Copy audio Context");

           return ;
       }
       else {
           LOGI(1, "Success to Copy audio Context");
    // how to set the audio stream parameters?
           g_out_audio_st->codec->codec_id         = g_in_audio_st->codec->codec_id;
           g_out_audio_st->codec->codec_tag        = 0;
           g_out_audio_st->pts                     = g_in_audio_st->pts;
           g_out_audio_st->time_base.num           = g_in_audio_st->time_base.num;
           g_out_audio_st->time_base.den           = g_in_audio_st->time_base.den;
           g_out_audio_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       LOGI(1, "end init audio");
    }

    // write video stream
    static void write_video_stream(AVPacket *pkt) {
       av_pkt = NULL;
       av_pkt = pkt;

       if( pkt == NULL || sizeof(*pkt) == 0 )
           return;

       av_rescale_q(av_pkt->pts, g_in_video_st->time_base, g_in_video_st->codec->time_base);
       av_rescale_q(av_pkt->dts, g_in_video_st->time_base, g_in_video_st->codec->time_base);

       av_init_packet(&outpkt);

       if( av_pkt->pts != AV_NOPTS_VALUE ) {
           if( last_video_pts == video_pts ) {
               video_pts++;
               last_video_pts = video_pts;
           }
           outpkt.pts = video_pts;
       }
       else {
           outpkt.pts = AV_NOPTS_VALUE;
       }

       if( av_pkt->dts == AV_NOPTS_VALUE )
           outpkt.dts = AV_NOPTS_VALUE;
       else
           outpkt.dts = video_pts;

       outpkt.data = av_pkt->data;
       outpkt.size = av_pkt->size;
       outpkt.stream_index = av_pkt->stream_index;
       outpkt.flags |= AV_PKT_FLAG_KEY;
       last_video_pts = video_pts;

       if(av_interleaved_write_frame(g_oc, &outpkt) < 0) {
    //  if(av_write_frame(g_oc, &outpkt) < 0) {
           LOGE(1, "Failed Video Write");
       }
       else {
           g_out_video_st->codec->frame_number++;
       }

       if( !&outpkt || sizeof(outpkt) == 0 )
           return;
       if( !av_pkt || sizeof(*av_pkt) == 0 )
           return;

       av_free_packet(&outpkt);
    }

    // write audio stream
    static void write_audio_stream(AVPacket *pkt) {
       av_pkt = NULL;
       av_pkt = pkt;

       if( pkt == NULL || sizeof(*pkt) == 0 )
               return;

       av_rescale_q(av_pkt->pts, g_in_audio_st->time_base, g_in_audio_st->codec->time_base);
       av_rescale_q(av_pkt->dts, g_in_audio_st->time_base, g_in_audio_st->codec->time_base);

       av_init_packet(&outpkt);

       if(av_pkt->pts != AV_NOPTS_VALUE)
           outpkt.pts = audio_pts;
       else
           outpkt.pts = AV_NOPTS_VALUE;

       if(av_pkt->dts == AV_NOPTS_VALUE)
           outpkt.dts = AV_NOPTS_VALUE;
       else {
           outpkt.dts = audio_pts;

           if( outpkt.pts >= outpkt.dts)
               outpkt.dts = outpkt.pts;

           if(outpkt.dts == audio_dts)
               outpkt.dts++;

           if(outpkt.pts < outpkt.dts) {
               outpkt.pts = outpkt.dts;
               audio_pts = outpkt.pts;
           }

           outpkt.data = av_pkt->data;
           outpkt.size = av_pkt->size;
           outpkt.stream_index = av_pkt->stream_index;
           outpkt.flags |= AV_PKT_FLAG_KEY;
           video_pts = audio_pts;
           audio_pts++;

           if( av_interleaved_write_frame(g_oc, &outpkt) < 0 ) {
    //      if( av_write_frame(g_oc, &outpkt) < 0 ) {
               LOGE(1, "Failed Audio Write");
           }
           else {
               g_out_audio_st->codec->frame_number++;
           }

           if( !&outpkt || sizeof(outpkt) == 0 )
               return;
           if( !av_pkt || sizeof(*av_pkt) == 0 )
               return;

           av_free_packet(&outpkt);
       }
    }

    Result here: recorded file
    Full source here: player.c

    I want to record an RTSP stream to a file while playing it.
    I have tested the video and audio streams while changing the parameters,
    but in the resulting file the video and audio are not in sync.
    I searched for information about ffmpeg, but almost everything I found covered only command-line usage or video recording.
    Please advise me.