
Media (16)

Keyword: - Tags -/mp3

Other articles (53)

  • Videos

    21 April 2011, by

    Like "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 video tag.
    One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name no names) and that each browser natively handles only certain video formats.
    Its main advantage is that video playback is supported natively by the browser, which removes the need for Flash and (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Possibility of farm deployment

    12 April 2011, by

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share the setup costs between several projects/individuals; to deploy a multitude of unique sites quickly; and to avoid having to dump every creation into a digital catch-all, as is the case with the large general-public platforms scattered across the (...)

On other sites (6063)

  • FFmpeg - concatenating mp4s from different sources - unable to stop "Non-monotonous DTS in output stream" warning

    7 August 2018, by Sam P

    I need to concatenate mp4 files from different sources, which means some of the variables are out of my control, such as timebase, aspect ratio and encoding. To get around this I re-encode and attempt to standardise the files before concatenating them. Unfortunately, despite this I still get "Non-monotonous DTS in output stream" warnings during the concatenation stage, and the output video always seems to have broken audio/video syncing by the last segment.

    I know there are a lot of other questions out there about resolving the warning above, and I've been through them all and reviewed the documentation, but unfortunately I've still been unable to solve it.

    I think the thing I don't understand is: if I have mp4s from different sources, what exactly do I need to do to ensure that the files will always concatenate together cleanly?

    What I’ve tried so far

    The script I'm using to standardise the mp4 files before concatenation is the following (it adjusts resolution, frame rate, timebase, audio bitrate, video bitrate, audio encoding and video encoding):

    ffmpeg -y -i $1 -vf 'scale=1280:720:force_original_aspect_ratio=1,pad=1280:720:(ow-iw)/2:(oh-ih)/2' -r 30 -video_track_timescale 90000 -b:a 128K -b:v 1200K -c:a aac -c:v libx264 $2

    Here's the ffprobe output for two of the files; there are some differences, but I'm not sure whether they are significant:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'intro.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf58.12.100
     Duration: 00:00:08.98, start: 0.000000, bitrate: 1210 kb/s
       Stream #0:0(eng): Video: h264 (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1069 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 132 kb/s (default)
       Metadata:
         handler_name    : SoundHandler

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'middle.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf58.12.100
     Duration: 00:00:59.72, start: 0.000000, bitrate: 1200 kb/s
       Stream #0:0(und): Video: h264 (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1063 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
       Metadata:
         handler_name    : SoundHandler

    They all have normal video and audio at this point.
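
    One difference that does stand out in the probes above is the audio sample rate: 48 kHz in intro.mp4 versus 44.1 kHz in middle.mp4. A variant of the standardisation command that also pins the audio parameters (48 kHz stereo is just a guess at a sensible common target, not a confirmed fix) would be:

    # Assumption: force every input to 48 kHz stereo; everything else as in the command above
    ffmpeg -y -i $1 -vf 'scale=1280:720:force_original_aspect_ratio=1,pad=1280:720:(ow-iw)/2:(oh-ih)/2' -r 30 -video_track_timescale 90000 -ar 48000 -ac 2 -b:a 128K -b:v 1200K -c:a aac -c:v libx264 $2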

    After that I concatenate them and add a watermark using the following (it sucks that I need to re-encode here):

     ffmpeg -y \
       -f concat \
       -safe 0 \
       -i $INFILES \
       -c:v libx264 \
       -c:a copy \
       -preset fast \
       -vf drawtext=enable="'between(t, $DRAW_TEXT_DELAY, $DRAW_TEXT_DURATION)': fontfile=$FONT_DIR/$FONT: text='$TEXT': fontcolor=$FONTCOLOR: fontsize=$FONTSIZE: $POSITION" \
       $OUTFILE

    INFILES is a path to a text file formatted like:

    file /usr/src/app/data/test/out/intro.mp4
    file /usr/src/app/data/test/out/middle.mp4
    file /usr/src/app/data/test/out/outro.mp4

    What am I missing here? Is there a way to debug this further?
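
    As a first debugging step (a sketch, not a definitive checklist), ffprobe can print just the stream fields that the concat demuxer expects to match across inputs (same codecs, same time base, etc.), e.g. for the audio streams:

    # Hypothetical check: run this for each input and diff the results
    ffprobe -v error -select_streams a:0 -show_entries stream=codec_name,sample_rate,channels,time_base -of default=noprint_wrappers=1 intro.mp4

    Repeating that for intro.mp4, middle.mp4 and outro.mp4 would at least make mismatches such as the 44.1 kHz vs 48 kHz audio above stand out before concatenation.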

  • How to encode an XDCAM .mxf and set the MediaInfo flag "Standard : Component"

    3 August 2018, by livio

    My goal is to encode an XDCAM HD 50 Mb/s .mxf, with ffmpeg, ffmbc or bmx, and get the following MediaInfo technical and tag data:

    Format Settings, Matrix : Custom

    Standard : Component


    When I try to convert with FFmbc I get a correct XDCAM HD file in every respect, except for the following MediaInfo technical data:

    Format Settings, Matrix : Default

    Standard : PAL


    Analysed with ffprobe, the target file and my encoded file are identical (in the screenshot, the target file is on the left and my encoded file on the right).

    And this is my command:

    ffmbc.exe -i %1 -tff -target xdcamhd422 -t 5 -y rewrapffmbc.mxf

    When I try to convert with ffmpeg I get an equally good file, but when I read the technical and tag data in MediaInfo the "Standard :" flag has disappeared. In this case too, my file is rejected by a broadcast company we deal with.

    Here is the ffmpeg command:

    ffmpeg.exe  -i %1 -r 25 -aspect 16:9 -pix_fmt yuv422p -color_primaries 1 -color_trc 1 -colorspace 1 -vcodec mpeg2video -non_linear_quant 1 -flags +ildct+ilme -top 1 -intra_vlc 1 -qmax 3 -lmin "1*QP2LAMBDA" -vtag xd5c -rc_max_vbv_use 1 -rc_min_vbv_use 1 -g 12 -b:v 50000k -minrate 50000k -maxrate 50000k -bufsize 3835k -bf 2 -trellis 1 -map 0:0 -map 0:1 -map 0:2 -map 0:1 -map 0:2 -map 0:1 -map 0:2 -map 0:1 -map 0:2  -map_channel 0.1.0:0.1.0 -map_channel 0.2.0:0.2.0  -map_channel 0.1.0:0.3.0 -map_channel 0.2.0:0.4.0  -map_channel 0.1.0:0.5.0 -map_channel 0.2.0:0.6.0  -map_channel 0.1.0:0.7.0 -map_channel 0.2.0:0.8.0 -c:a pcm_s24le -ar 48000 -ac 1 -map_metadata 0  -timecode 09:59:59:20 -y profilo-1.mxf
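
    One possibly relevant knob, offered only as an untested assumption: MediaInfo reads its "Standard" field from the MPEG-2 sequence_display_extension, and newer ffmpeg builds expose private options on the mpeg2video encoder to control it. If ffmpeg -h encoder=mpeg2video lists seq_disp_ext and video_format in your build, a hedged variant of the command above would add them next to the other encoder options, leaving everything else unchanged:

    rem Hypothetical addition, only if the mpeg2video encoder in your build exposes these options
    -seq_disp_ext always -video_format component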

    Can someone provide a solution or a workaround?

    Thank you

  • ffmpeg in "streaming mode" suddenly stops writing output file and resumes after several minutes

    29 June 2018, by EliA

    I’m using ffmpeg to convert chunks of media in webm format to a wav file in a "continuous stream" mode.

    Using a Python wrapper, I'm feeding ffmpeg webm chunks (let's say one chunk each second) and ffmpeg continuously writes the output wav (then, in a separate thread, I read this wav file and pass the new data on).

    This works well most of the time, but sometimes I get this weird behaviour:

    • after some chunks are processed and output to the wav file, ffmpeg stops writing to the output file (the file stops growing but ffmpeg keeps getting the webm chunks)
    • this can go on like this for several minutes, and then ffmpeg resumes writing to the wav file
    • eventually, the complete wav file is written successfully, but this causes huge delays in the streaming process.

    Looking at CPU and memory usage, neither seems to be the issue.

    Any ideas as to what could cause ffmpeg to stop streaming for several minutes and then resume?
    This can happen after 30 seconds, after 3 minutes or after an hour of streaming media chunks, and the time it takes ffmpeg to resume also varies.

    In my Python code, I'm basically opening a subprocess.Popen() process and writing to the process's stdin to feed ffmpeg the media chunks.

    So my code looks something like:

    ffmpeg_proc = subprocess.Popen('ffmpeg -y -f webm -i - -ac 1 -ar 16000  -f wav /tmp/out.wav', shell=True, stdin=subprocess.PIPE)

    for chunk in chunks:
       ffmpeg_proc.stdin.write(chunk)
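
    A variant of the same loop that avoids shell=True and flushes the pipe after every chunk, then closes stdin at the end (Python-side pipe buffering is only a guess at the cause of the stalls, not a confirmed explanation):

    import subprocess

    # Same ffmpeg invocation as above, passed as an argument list instead of a shell string.
    ffmpeg_proc = subprocess.Popen(
        ['ffmpeg', '-y', '-f', 'webm', '-i', '-',
         '-ac', '1', '-ar', '16000', '-f', 'wav', '/tmp/out.wav'],
        stdin=subprocess.PIPE)

    for chunk in chunks:
        ffmpeg_proc.stdin.write(chunk)
        ffmpeg_proc.stdin.flush()  # guess: keep chunks from sitting in Python's pipe buffer

    # Signal EOF so ffmpeg can finish and rewrite the wav header.
    ffmpeg_proc.stdin.close()
    ffmpeg_proc.wait()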

    I'm using Python 3.6 and ffmpeg version 2.8.14-0ubuntu0.16.04.1

    Thanks in advance!