

Other articles (41)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Other interesting software

    13 April 2011, by

    We don't claim to be the only ones doing what we do, and we certainly don't claim to be the best at it. We simply try to do it well, and to keep getting better.
    The following list covers software that does more or less what MediaSPIP does, or that MediaSPIP more or less tries to emulate.
    We don't know them and we haven't tried them, but you can take a peek.
    Videopress
    Website : http://videopress.com/
    License : GNU/GPL v2
    Source code : (...)

On other sites (7630)

  • FFMPEG - concatenating mp4s from different sources - unable to stop "Non-monotonous DTS in output stream" warning

    19 November 2018, by Sam P

    I need to concatenate mp4 files from different sources, which means some of the variables, such as timebase, aspect ratio and encoding, are out of my control. To get around this I re-encode and attempt to standardise the files before concatenating them. Unfortunately, despite this I get Non-monotonous DTS in output stream warnings during the concatenation stage, and the output video always seems to have broken audio/video syncing by the last segment.

    I know there are a lot of other questions out there about resolving the warning above, and I've been through them all and reviewed the documentation, but unfortunately I've still been unable to solve it.

    I think the thing I don't understand is: if I have mp4s from different sources, what exactly do I need to do to ensure that the files will always concatenate together neatly?

    What I’ve tried so far

    The script I'm using to standardise the mp4 files before concatenation is the following (it sets resolution, frame rate, timebase, audio bitrate, video bitrate, audio codec and video codec):

    ffmpeg -y -i $1 -vf 'scale=1280:720:force_original_aspect_ratio=1,pad=1280:720:(ow-iw)/2:(oh-ih)/2' -r 30 -video_track_timescale 90000 -b:a 128K -b:v 1200K -c:a aac -c:v libx264 $2

    Here's the ffprobe output for two of the files. There are some differences, but I'm not sure whether they are significant:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'intro.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf58.12.100
     Duration: 00:00:08.98, start: 0.000000, bitrate: 1210 kb/s
       Stream #0:0(eng): Video: h264 (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1069 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 132 kb/s (default)
       Metadata:
         handler_name    : SoundHandler

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'middle.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf58.12.100
     Duration: 00:00:59.72, start: 0.000000, bitrate: 1200 kb/s
       Stream #0:0(und): Video: h264 (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1063 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
       Metadata:
         handler_name    : SoundHandler

    They all have normal video and audio at this point.
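
    One difference that does stand out in the ffprobe output above is the audio sample rate (48000 Hz for intro.mp4 vs 44100 Hz for middle.mp4), which the standardisation command does not normalise. A possible variant of that command (untested here; 48 kHz is picked only as an arbitrary common target) would force the sample rate as well:

    ffmpeg -y -i $1 -vf 'scale=1280:720:force_original_aspect_ratio=1,pad=1280:720:(ow-iw)/2:(oh-ih)/2' -r 30 -video_track_timescale 90000 -ar 48000 -b:a 128K -b:v 1200K -c:a aac -c:v libx264 $2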

    After that I concatenate them and add a watermark using the following (it sucks that I need to re-encode here):

     ffmpeg -y \
       -f concat \
       -safe 0 \
       -i $INFILES \
       -c:v libx264 \
       -c:a copy \
       -preset fast \
       -vf drawtext=enable="'between(t, $DRAW_TEXT_DELAY, $DRAW_TEXT_DURATION)': fontfile=$FONT_DIR/$FONT: text='$TEXT': fontcolor=$FONTCOLOR: fontsize=$FONTSIZE: $POSITION" \
       $OUTFILE

    INFILES is a path to a text file formatted like:

    file /usr/src/app/data/test/out/intro.mp4
    file /usr/src/app/data/test/out/middle.mp4
    file /usr/src/app/data/test/out/outro.mp4

    What am I missing here? Is there a way to debug this further?
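
    One way to inspect the problem (a sketch, not something specific to this workflow) is to dump the per-packet timestamps of the concatenated file with ffprobe and look for the point where DTS stops increasing, e.g.:

    ffprobe -v error -select_streams a:0 -show_entries packet=pts_time,dts_time -of csv $OUTFILE

    Running the same command with -select_streams v:0 shows the video side; whichever stream's DTS jumps backwards at a segment boundary is the one that still differs between the inputs.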

  • ffmpeg: "Referenced QT chapter track not found"

    29 April 2019, by Ze'ev

    Using ffmpeg to replace the audio in a QuickTime file with audio from a WAV.

    Anyone know why I'm getting Referenced QT chapter track not found?

    Command:

    $ ffmpeg \
    -i "$video" -t 25 \
    -i "$audio" -map 0:v -c:v copy -map 1:a -c:a pcm_s24le -ar 48000 \
    -hide_banner "$output"

    Output:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7faf62010600] Referenced QT chapter track not found
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mov':
     Metadata:
       major_brand     : qt
       minor_version   : 537199360
       compatible_brands: qt
       creation_time   : 2018-11-06T09:27:43.000000Z
     Duration: 00:00:25.00, start: 0.000000, bitrate: 186987 kb/s
       Stream #0:0(eng): Video: prores (apch / 0x68637061), yuv422p10le(bt709, progressive), 1920x1080, 185115 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 25 tbn, 25 tbc (default)
       Metadata:
         creation_time   : 2018-11-06T09:27:43.000000Z
         handler_name    : Apple Alias Data Handler
         encoder         : Apple ProRes 422 (HQ)
         timecode        : 00:00:00:00
       Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, stereo, s16, 1536 kb/s (default)
       Metadata:
         creation_time   : 2018-11-06T09:27:43.000000Z
         handler_name    : Apple Alias Data Handler
         timecode        : 00:00:00:00
       Stream #0:2(eng): Data: none (tmcd / 0x64636D74), 0 kb/s (default)
       Metadata:
         creation_time   : 2018-11-06T09:27:43.000000Z
         handler_name    : Apple Alias Data Handler
         timecode        : 00:00:00:00
    Guessed Channel Layout for Input Stream #1.0 : stereo
    Input #1, wav, from 'audio.wav':
     Metadata:
       encoded_by      : Pro Tools
       originator_reference: aaOpKJaTN7Nk
       date            : 2018-11-08
       creation_time   : 13:53:50
       time_reference  : 166698000
     Duration: 00:00:25.00, bitrate: 2128 kb/s
       Stream #1:0: Audio: pcm_s24le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s32 (24 bit), 2116 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
     Stream #1:0 -> #0:1 (pcm_s24le (native) -> pcm_s24le (native))
    Press [q] to stop, [?] for help
    Output #0, mov, to 'test19.mov':
     Metadata:
       major_brand     : qt
       minor_version   : 537199360
       compatible_brands: qt
       encoder         : Lavf58.12.100
       Stream #0:0(eng): Video: prores (apch / 0x68637061), yuv422p10le(bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 185115 kb/s, 0.04 fps, 25 tbr, 12800 tbn, 25 tbc (default)
       Metadata:
         creation_time   : 2018-11-06T09:27:43.000000Z
         handler_name    : Apple Alias Data Handler
         encoder         : Apple ProRes 422 (HQ)
         timecode        : 00:00:00:00
       Stream #0:1: Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, stereo, s32 (24 bit), 2304 kb/s
       Metadata:
         encoder         : Lavc58.18.100 pcm_s24le
    frame=  625 fps=277 q=-1.0 Lsize=  566343kB time=00:00:24.96 bitrate=185876.0kbits/s speed=11.1x
    video:564928kB audio:1406kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.001496%

    Same error with -map 0:v:0
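
    For what it's worth, the message appears to come from the mov demuxer (note the [mov,mp4,m4a,3gp,3g2,mj2 @ ...] prefix) when the container references a chapter track it cannot locate, so it is a warning about the input rather than about the mapping. A possible way to silence it (untested here, and assuming the build's mov demuxer exposes the ignore_chapters option) is to skip chapter parsing on that input:

    $ ffmpeg \
    -ignore_chapters 1 -i "$video" -t 25 \
    -i "$audio" -map 0:v -c:v copy -map 1:a -c:a pcm_s24le -ar 48000 \
    -hide_banner "$output"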

  • ffmpeg: "Impossible to convert between the formats" when using CUDA hardware acceleration

    21 August 2020, by MorenoGentili

    I'm using ffmpeg from the command line on Windows 10, and I wanted to give CUDA a try to improve execution times. A simple cut command like this works fine, and its execution time is reduced by 60-70%. Awesome.

    ffmpeg -hwaccel cuvid -c:v h264_cuvid -ss 00:00:10 -i in.mp4 -c:v h264_nvenc out.mp4

    Now I tried to use the -filter_complex flag to overlay, fade and translate a PNG image over a video. The working non-CUDA command is this one:

    ffmpeg -i in.mp4 -loop 1 -t 75 -i overlay.png -filter_complex "[1:v]fade=t=in:st=30:d=0.3:alpha=1,fade=t=out:st=35.7:d=0.3:alpha=1[png1];[0:v][png1]overlay=x='if(gte(t,30), (t-30)*10, NAN)'" -movflags +faststart out.mp4

    Then I added the CUDA-related flags to the command, like this:

    ffmpeg -hwaccel cuvid -c:v h264_cuvid -i in.mp4 -loop 1 -t 75 -i overlay.png -filter_complex "[1:v]fade=t=in:st=30:d=0.3:alpha=1,fade=t=out:st=35.7:d=0.3:alpha=1[png1];[0:v][png1]overlay=x='if(gte(t,30), 60-tanh((t-30)*30/5)*60, NAN)'" -movflags +faststart -c:v h264_nvenc out.mp4

    But it won't work. I get this error:

    Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scaler_0'
    Error reinitializing filters!
    Failed to inject frame into filter network: Function not implemented
    Error while processing the decoded data for stream #0:0

    I don't even know what it means. Can I actually run ANY ffmpeg command on the GPU with CUDA? I've found some information about the hwupload_cuda filter, but I'm not sure whether I should use it or how; my attempts have failed so far.
    Any advice on how I should modify the command to make it work on the GPU?
    Thanks.
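
    The error suggests that the frames decoded by h264_cuvid stay in GPU memory, while the fade and overlay filters in the graph are software filters that only accept frames in system memory. One possible way around this (a sketch, untested; it keeps GPU decode and GPU encode but downloads the frames for the software filter graph) is to add hwdownload at the start of the filter chain:

    ffmpeg -hwaccel cuvid -c:v h264_cuvid -i in.mp4 -loop 1 -t 75 -i overlay.png -filter_complex "[0:v]hwdownload,format=nv12[base];[1:v]fade=t=in:st=30:d=0.3:alpha=1,fade=t=out:st=35.7:d=0.3:alpha=1[png1];[base][png1]overlay=x='if(gte(t,30), 60-tanh((t-30)*30/5)*60, NAN)'" -movflags +faststart -c:v h264_nvenc out.mp4

    The simpler fallback is to drop -hwaccel cuvid -c:v h264_cuvid entirely and keep only -c:v h264_nvenc, so decoding and filtering run on the CPU and only the encode uses the GPU.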