Other articles (106)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site and around MediaSPIP in general aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    After it is activated, MediaSPIP init automatically sets up a preconfiguration so that the new feature is operational right away. No separate configuration step is therefore required.

  • Customizing categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration section that you can specify the (...)

On other sites (10159)

  • ffmpeg with multiple live-stream inputs adds async delay after filter

    12 January 2021, by Godmar

    I am struggling to apply ffmpeg to the remote control of an autonomous truck.

    There are 3 video streams from cameras on the local network, each described by an .sdp file like this one (MJPEG over RTP, correct me if I'm wrong):
m=video 50910 RTP/AVP 26
c=IN IP4 192.168.1.91

    I want to combine the three pictures into a single video stream using this:

    ffmpeg -hide_banner -protocol_whitelist "rtp,file,udp" -i "cam1.sdp" \
-protocol_whitelist "rtp,file,udp" -i "cam2.sdp" \
-protocol_whitelist "rtp,file,udp" -i "cam3.sdp" \
-filter_complex "\
nullsrc=size=1800x600 [back]; \
[back][1:v]overlay=1000[tmp1]; \
[tmp1][2:v]overlay=600[tmp2]; \
[tmp2][0:v]overlay" \
-vcodec libx264 \
-crf 25 -maxrate 4M -bufsize 8M -r 30 -preset ultrafast -tune zerolatency \
-f mpegts udp://localhost:1234

    When I launch this, ffmpeg starts emitting errors about lost RTP packets. In the output, the fps of every camera looks unstable, so this is unacceptable.
I am able to run ffplay or mplayer on the three cameras simultaneously, and I can also produce such a stream using a pre-recorded video file as input. So it seems that ffmpeg just can't read three UDP streams that fast.
The cameras stream at 10 Mbit/s, 800x600, 30 fps MJPEG; those are the lowest settings I can afford, and the cameras can do much more.

    So I tried to do something about the size of the UDP buffer. It is possible to set buffer_size and fifo_size for a udp:// stream, but there is no such option for a stream described by an .sdp file. I did find a way to open the stream with an rtp://-style URL, but it doesn't seem to pass the arguments after '?' down to the UDP layer.
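
    For reference, on a plain udp:// input (say, MPEG-TS over UDP) those options attach as URL parameters; buffer_size is in bytes, fifo_size in 188-byte packets, and the address, port and sizes below are placeholders:

ffmpeg -i "udp://192.168.1.91:50910?buffer_size=4194304&fifo_size=100000" \
-c copy -f mpegts udp://localhost:1234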

    My next idea was to launch multiple ffmpeg instances: receive each stream separately, process it, and re-stream it to another instance that consumes any kind of stream, stitches the three together and sends the result out (roughly as sketched below). That would actually be a good setup, since I need to filter the streams individually (crop, lens correction, rotation), and a single large -filter_complex in one ffmpeg instance might not handle all the streams. And I'm going to have 3 more of them.
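
    The relay-and-stitch setup would look roughly like this; the local ports 1241-1243 and the encoder settings are placeholders:

# one relay per camera, re-encoding to MPEG-TS on a local UDP port
ffmpeg -protocol_whitelist "rtp,file,udp" -i cam1.sdp \
-vcodec libx264 -preset ultrafast -tune zerolatency \
-f mpegts udp://127.0.0.1:1241
# ... likewise cam2.sdp -> port 1242 and cam3.sdp -> port 1243 ...

# the stitcher consumes the three local streams and overlays them as before
ffmpeg -i udp://127.0.0.1:1241 -i udp://127.0.0.1:1242 -i udp://127.0.0.1:1243 \
-filter_complex "\
nullsrc=size=1800x600 [back]; \
[back][1:v]overlay=1000[tmp1]; \
[tmp1][2:v]overlay=600[tmp2]; \
[tmp2][0:v]overlay" \
-vcodec libx264 -preset ultrafast -tune zerolatency \
-f mpegts udp://localhost:1234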

    I tried to implement this setup using 3 FIFO pipes, and alternatively using 3 udp://localhost:124x internal streams. Neither approach solved my problem, but the separate ffmpeg instances do seem able to receive three streams simultaneously.
I was able to open the re-streamed feeds through the pipes and over UDP with mplayer or ffplay. They are completely synced and live.
The stitching still fails miserably.
The pipes gave me a few seconds of delay per camera, and after stitching the streams were choppy and out of sync.
The udp:// variant gave me a smooth video stream as a result, but one camera has a 5 second delay, and the others have 15 and 25.

    This smells like a buffer. Changing fifo_size and buffer_size doesn't seem to have much influence.
I tried adding a local-time timestamp in the re-streamer instances - this is how I found the 5, 15 and 25 second delays.
I tried adding a frame timestamp in the stitcher instance - those come out completely synced. So setpts=PTS-STARTPTS doesn't work either. (The debug filters are sketched below.)
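
    The debug timestamps were drawtext filters along these lines; the size and position values are placeholders:

# relay side: wall-clock time, to measure per-camera delay
drawtext=text='%{localtime}':fontsize=48:fontcolor=white:x=10:y=10
# stitcher side: frame timestamp, to check sync inside the filter graph
drawtext=text='%{pts\:hms}':fontsize=48:fontcolor=white:x=10:y=10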

    So, the buffer happens between the udp:// socket and the -filter_complex input. How do I get rid of it? How do you like my workaround? Am I doing it completely wrong?

  • ffmpeg: use vidstabtransform to overlay it over blurred background

    5 November 2023, by konewka

    I am using ffmpeg to concatenate multiple video clips taken of the same object over multiple timeframes. To make sure the videos are properly aligned (and therefore show the object in roughly the same position), I manually identify two points in the first frame of each clip, and use those to calculate the scaling and positioning necessary for proper alignment. I'm using Python for this, and it also generates the ffmpeg command for me. When the calculated scale of a video is less than 100%, some parts of the frame will become black. To counter that, I overlay the scaled and positioned video on top of a blurred version of the original video (like this effect).
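
    The alignment calculation is essentially the following (a sketch, assuming no rotation between clips; p1, p2 are the marked points in the reference clip and q1, q2 the matching points in the new clip):

scale:  s = |p1 - p2| / |q1 - q2|
offset: t = p1 - s * q1

    The new clip is then scaled by s and overlaid at position t.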

    Now, additionally, some of the video clips are a bit shaky, so my flow now first applies the vidstabdetect and vidstabtransform filters, and uses the transformed, stabilized version as input for my final command. However, if the shaking is significant, vidstabtransform will zoom in, and I will therefore either lose some of the detail around the edges, or a black border is created around the edge. As I later include the stabilized version of the video in the concatenation, with the possibility of it shrinking, I would rather perform the vidstabtransform step inside my final command and feed its output directly into the overlay over the blurred version. That way, the clip would rotate across the frame as it is stabilized while being shown over the blurred background. Is it possible to achieve this using ffmpeg, or am I trying to stretch it too far?

    As a minimal example, these are my commands:

ffmpeg -i video1.mp4 -vf vidstabdetect=output=transform.trf -f null -

ffmpeg -i video1.mp4 -vf vidstabtransform=input=transform.trf video1_stabilized.mp4

# same for video2.mp4

# for each input: blur one copy, scale the other, overlay the scaled copy
# over the blur at its calculated position, then concatenate the two results
ffmpeg -i video1_stabilized.mp4 -i video2_stabilized.mp4 -filter_complex "
    [0:v]split=2[v0blur][v0scale];
    [v0blur]gblur=sigma=50[v0blur];
    [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];
    [v0blur][v0scale]overlay=x=100:y=200[v0];
    [1:v]split=2[v1blur][v1scale];
    [v1blur]gblur=sigma=50[v1blur];
    [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
    [v1blur][v1scale]overlay=x=150:y=150[v1];
    [v0][v1]concat=n=2" \
-c:v libx264 -r 30 out.mp4

    So, I know I can put the vidstabtransform step into the filter_complex graph (I'll still do the detection in a separate step), but can I also use it in such a way that the stabilization happens over the blurred background, with the clip moving around the frame as it is stabilized?

    EDIT: so to include vidstabtransform in the filter graph, it would then look like this:

ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "
    [0:v]vidstabtransform=input=transform1.trf[v0stab];
    [v0stab]split=2[v0blur][v0scale];
    [v0blur]gblur=sigma=50[v0blur];
    [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];
    [v0blur][v0scale]overlay=x=100:y=200[v0];
    [1:v]vidstabtransform=input=transform2.trf[v1stab];
    [v1stab]split=2[v1blur][v1scale];
    [v1blur]gblur=sigma=50[v1blur];
    [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
    [v1blur][v1scale]overlay=x=150:y=150[v1];
    [v0][v1]concat=n=2" \
-c:v libx264 -r 30 out.mp4

  • avformat/matroskaenc: Don't waste bytes writing level 1 elements

    20 April 2019, by Andreas Rheinhardt
    avformat/matroskaenc: Don't waste bytes writing level 1 elements

    Up until now, the length field of most level 1 elements has been written
    using eight bytes, although it is known in advance how much space the
    content of said elements will take up so that it would be possible to
    determine the minimal amount of bytes for the length field. This
    commit changes this.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
    Signed-off-by: James Almer <jamrial@gmail.com>

    • [DH] libavformat/matroskaenc.c
    • [DH] tests/fate/matroska.mak
    • [DH] tests/fate/wavpack.mak
    • [DH] tests/ref/fate/aac-autobsf-adtstoasc
    • [DH] tests/ref/fate/binsub-mksenc
    • [DH] tests/ref/fate/rgb24-mkv
    • [DH] tests/ref/lavf/mka
    • [DH] tests/ref/lavf/mkv
    • [DH] tests/ref/lavf/mkv_attachment
    • [DH] tests/ref/seek/lavf-mkv