Advanced search

Media (91)

Other articles (41)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise a description of the problem as possible; if possible, the steps that lead to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used.
    The HTML5 player was created specifically for MediaSPIP: its appearance can be fully customised to match a chosen theme.
    These technologies make it possible to deliver video and audio both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player has been created specifically for MediaSPIP and can easily be adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8196)

  • ffmpeg - overlay multiple fading texts with different colors

    18 November 2017, by Abc123

    I have a problem with this ffmpeg command: it works fine if the fading text is in white, but if I change the fontcolor to something else (for example black), the fading text does not appear. Any ideas?

    ffmpeg -i ./based_video/480/clip3.mp4 -filter_complex "color=black:100x100[c]; [c][0]scale2ref[ct][mv0]; \
    [ct]setsar=1,split=3[t1][t2][t3]; \
    [t1]drawtext=fontfile=/usr/share/fonts/truetype/roboto/Roboto-Bold.ttf:text='\$30,000.0':fontsize=40:fontcolor=white,split[text1][alpha1]; \
    [text1][alpha1]alphamerge,fade=t=in:st=1:d=1:alpha=1,fade=t=out:st=5:d=1:alpha=1[txta1]; \
    [t2]drawtext=fontfile=/usr/share/fonts/truetype/roboto/Roboto-Bold.ttf:text='\$30,000.0':fontsize=40:fontcolor=white,split[text2][alpha2]; \
    [text2][alpha2]alphamerge,fade=t=in:st=1:d=1:alpha=1,fade=t=out:st=5:d=1:alpha=1[txta2]; \
    [t3]drawtext=fontfile=/usr/share/fonts/truetype/roboto/Roboto-Bold.ttf:text='\$30,000.0':fontsize=40:fontcolor=white,split[text3][alpha3]; \
    [text3][alpha3]alphamerge,fade=t=in:st=1:d=1:alpha=1,fade=t=out:st=5:d=1:alpha=1[txta3]; \
    [mv0][txta1]overlay=x='100':y='200':shortest=1[mv1]; \
    [mv1][txta2]overlay=x='300':y='200':shortest=1[mv2]; \
    [mv2][txta3]overlay=x='500':y='200':shortest=1" \
    -c:v libx264 -c:a copy ./output_video/testnew-clip3-output.mp4

    The full log is here:
    https://docs.google.com/document/d/1y9Dnn0Df75J8P_hZ6LjHTX2dk-8z97UnTjlX8dnc0v0/edit?usp=sharing

    Thanks in advance
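
    A plausible explanation, added here for context and not part of the original question: alphamerge takes the grayscale of its second input as the alpha plane, and in the command above both split outputs carry the same text drawn on a black background, so a black fontcolor produces an all-black mask and the overlay ends up fully transparent, while white text produces a usable mask. One possible workaround is to split before drawtext and render the mask branch in white while the visible branch uses the desired color. The sketch below shows a single text overlay only, reusing the font path, text and coordinates from the question:

    # Sketch (same input file as above): visible text drawn in black,
    # alpha mask drawn in white, then merged and faded exactly as in the question.
    ffmpeg -i ./based_video/480/clip3.mp4 -filter_complex "color=black:100x100[c]; [c][0]scale2ref[ct][mv0]; \
    [ct]setsar=1,split[vis][msk]; \
    [vis]drawtext=fontfile=/usr/share/fonts/truetype/roboto/Roboto-Bold.ttf:text='\$30,000.0':fontsize=40:fontcolor=black[text1]; \
    [msk]drawtext=fontfile=/usr/share/fonts/truetype/roboto/Roboto-Bold.ttf:text='\$30,000.0':fontsize=40:fontcolor=white[alpha1]; \
    [text1][alpha1]alphamerge,fade=t=in:st=1:d=1:alpha=1,fade=t=out:st=5:d=1:alpha=1[txta1]; \
    [mv0][txta1]overlay=x=100:y=200:shortest=1" \
    -c:v libx264 -c:a copy ./output_video/testnew-clip3-output.mp4

    The labels [vis] and [msk] are arbitrary names introduced for this sketch; drawtext's own alpha option is another way to fade text without alphamerge.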

  • ffmpeg x11grab moov atom not found

    30 March 2021, by Jintor

    Two FFmpeg processes:

    (1) an ffmpeg x11grab capture written to a .mp4 file
    (2) a second ffmpeg that takes the .mp4 and restreams it simultaneously to multiple RTMP endpoints

    ISSUE: the file generated in (1) gives the error "moov atom not found".

    This is the command that generates (1):

    ffmpeg -re -y -f x11grab -draw_mouse 0 -framerate 30 \
      -video_size $RESOLUTION -i :$DISPLAY_NUM -c:a aac -c:v libx264 \
      -movflags +faststart -preset ultrafast -crf 28 -refs 4 -qmin 4 \
      -pix_fmt yuv420p -filter:v fps=30 file.mp4

    In (2), when I try ffmpeg -i file.mp4 output somewhere, I get "moov atom not found", so (2) can't read or open the file produced by (1).

    What am I missing?

    In (1), -movflags +faststart doesn't seem to fix the issue.

    EDIT: more details on the context

    I'm using OpenVidu: WebRTC with Kurento and coturn.

    The recording feature creates a .mp4 on the fly as the chat is going on.

    To start the recording, there is an API call I can make to my server; the recording stops automatically when all users leave the chatroom, or I can make another API call to stop it. See the composed video section at https://docs.openvidu.io/en/2.17.0/advanced-features/recording/

    OpenVidu also has webhooks.

    My problem is not how to stop ffmpeg, but getting ffmpeg to encode while the .mp4 (or another format) is being generated "on the fly".

    There are 2 options:

    OPTION 1: individual => one .webm per camera => ffmpeg can restream this .webm as HLS or RTMP => it's working.

    OPTION 2: the issue is with the "Composed" video => it uses ffmpeg to x11grab the session... but the result is an .mp4 without a moov atom, so ffmpeg can't do anything with it.

    See the composed.sh script here:
    https://github.com/OpenVidu/openvidu/blob/master/openvidu-server/docker/openvidu-recording/scripts/composed.sh
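
    A note added for context and not part of the original post: the default MP4 muxer writes the moov atom only when the output is finalized, and -movflags +faststart merely relocates that atom to the front of the file after encoding has finished. A file that is still being written, or whose writer was killed without a clean shutdown, therefore has no moov atom yet, and a second ffmpeg cannot open it. One possible workaround, sketched below as a simplified variant of command (1), is to write a streamable layout such as fragmented MP4 (an MPEG-TS output would behave similarly); $RESOLUTION, $DISPLAY_NUM and file.mp4 are taken from the command in the question:

    # Sketch: fragmented MP4 writes an empty moov up front and then self-contained
    # fragments, so the growing file stays readable by another ffmpeg process.
    ffmpeg -re -y -f x11grab -draw_mouse 0 -framerate 30 \
      -video_size $RESOLUTION -i :$DISPLAY_NUM -c:v libx264 \
      -preset ultrafast -crf 28 -pix_fmt yuv420p \
      -movflags +frag_keyframe+empty_moov+default_base_moof file.mp4

    +faststart is dropped in this sketch because it only takes effect once the file is closed.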

    


  • avfilter/zoompan: add in_time variable

    19 June 2020, by exwm
    avfilter/zoompan: add in_time variable
    

    Currently, the zoompan filter exposes a 'time' variable (missing from docs) for use in
    the 'zoom', 'x', and 'y' expressions. This variable is perhaps better named
    'out_time' as it represents the timestamp in seconds of each output frame
    produced by zoompan. This patch adds aliases 'out_time' and 'ot' for 'time'.

    This patch also adds an 'in_time' (alias 'it') variable that provides access
    to the timestamp in seconds of each input frame to the zoompan filter.
    This helps to design zoompan filters that depend on the input video timestamps.
    For example, it makes it easy to zoom in instantly for only some portion of a video.
    Both the 'out_time' and 'in_time' variables have been added to the documentation
    for zoompan.

    Example usage of 'in_time' in the zoompan filter to zoom in 2x for the
    first second of the input video and 1x for the rest:
    zoompan=z='if(between(in_time,0,1),2,1)':d=1

    V2: Fix zoompan filter documentation stating that the 'time' variable
    would be NAN if the input timestamp is unknown.

    V3: Add 'it' alias for 'in_time'. Add 'out_time' and 'ot' aliases for 'time'.
    Minor corrections to zoompan docs.

    Signed-off-by: exwm <thighsman@protonmail.com>

    • [DH] doc/filters.texi
    • [DH] libavfilter/vf_zoompan.c
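
    For illustration only (not part of the commit): a minimal command applying the example expression above, assuming an ffmpeg build that already includes this patch; input.mp4, output.mp4 and the 1280x720 output size are placeholders.

    # Sketch: 2x zoom while in_time is between 0 and 1 second, 1x afterwards,
    # with x/y expressions keeping the zoom centred on the frame.
    ffmpeg -i input.mp4 -vf "zoompan=z='if(between(in_time,0,1),2,1)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=1:s=1280x720" -c:a copy output.mp4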