Advanced search

Media (91)

Other articles (84)

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work in Internet Explorer
    On Internet Explorer (at least versions 8 and 7), the plugin uses the Flowplayer Flash player to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
    If the configuration of that Apache module contains a line resembling the following, try removing it or commenting it out to see whether the player then works correctly: /** * GeSHi (C) 2004 - 2007 Nigel McNie, (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a particular theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    In SPIPMotion, an audio or video document goes through three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions beyond the normal behaviour are performed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (4545)

  • ffmpeg live stream transcoding. A/V sync issues on fast camera movement

    21 August 2020, by Kelsnare

    1. I create a WebRTC peer connection with my server (STUN only).
    2. I use pion webrtc for the server.
    3. I write the received RTP packets as VP8 and Opus streams, as described here, to two pipes (the writers; created with os.Pipe() in golang).
    4. The read ends of these two pipes are received by ffmpeg as inputs (via exec.Command.ExtraFiles) for transcoding with libx264 and aac into a single stream; a Python sketch of this wiring follows the command below. The command:

    ffmpeg -re -i pipe:3 -re -i pipe:4 -c:a aac -af aresample=48000 -c:v libx264 -x264-params keyint=48:min-keyint=24 -profile:v main -preset ultrafast -tune zerolatency -crf 20 -fflags genpts -avoid_negative_ts make_zero -vsync vfr -map 0:0,0:0 -map 1:0,0:0 -f matroska -strict -2 pipe:5

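    For illustration, a minimal sketch of this wiring in Python (os.pipe() plus subprocess's pass_fds standing in for Go's os.Pipe() and exec.Command.ExtraFiles; the trimmed-down ffmpeg flags are assumptions, not the full command above):

import os
import subprocess

# Two OS pipes; ffmpeg reads the VP8 and Opus streams from the read ends,
# mirroring os.Pipe() + exec.Command.ExtraFiles on the Go side.
v_read, v_write = os.pipe()
a_read, a_write = os.pipe()

cmd = [
    "ffmpeg",
    "-i", f"pipe:{v_read}",   # video input: read end of the first pipe
    "-i", f"pipe:{a_read}",   # audio input: read end of the second pipe
    "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
    "-c:a", "aac",
    "-f", "matroska", "pipe:1",
]

# pass_fds keeps the read ends open in the child with the same fd numbers,
# which is what lets ffmpeg address them as pipe:<fd>.
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, pass_fds=(v_read, a_read))

# The capture side then writes containerized VP8/Opus into v_write and a_write.
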
    5. The command in point 4 above outputs to pipe:5, the read end of which is taken as input by the following:

    ffmpeg -hide_banner -y -re -i pipe:3 -sn -vf scale=-1:'min(ih,360)' -c:v libx264 -pix_fmt yuv420p -c:a aac -b:a 128k -b:v 1400k -maxrate 1498k -bufsize 2100k -hls_time 1 -hls_playlist_type event -hls_base_url /workdir/streamID/360p -hls_segment_filename /workdir/streamID/360p/360_%03d.ts -f hls /workdir/streamID/360p.m3u8

    6. This works fine as long as my webcam does not move. The moment it does, the video speed suddenly increases for a split second and an audio delay is introduced. The delay keeps growing each time I shake my webcam.

    If the output of the first command in point 4 above is written to a file instead, its a/v sync is absolutely fine, even with vigorous camera shaking. The weird audio delay appears only when transcoding for HLS output, irrespective of whether I view it live or play it back later.

    This is my first time working with ffmpeg/HLS/WebRTC. It would be really helpful to be pointed in the right direction, at least so I can debug this or understand why it happens. Any and all help is greatly appreciated.

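    One possible first debugging step, sketched under the assumption that pipe:5 has been captured to a file (capture.mkv is a made-up name): dump per-packet timestamps with ffprobe and watch where the audio and video PTS begin to drift apart.

import subprocess

# Each output line is "stream_index,pts_time"; compare the two streams'
# timelines around the moment the camera starts moving.
out = subprocess.run(
    ["ffprobe", "-hide_banner",
     "-show_entries", "packet=stream_index,pts_time",
     "-of", "csv=p=0", "capture.mkv"],
    capture_output=True, text=True, check=True,
)
for line in out.stdout.splitlines()[:20]:
    print(line)
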
  • FFmpeg: save the last 10 seconds before a movement and the next 30 seconds

    2 November 2020, by Set1ish

    I have a surveillance camera whose live video I process frame by frame. When movement is detected, I want to save a video containing the last 10 seconds before the movement and the next 30 seconds after it.
    I believe (maybe I'm wrong) that this last-10-seconds + next-30-seconds task should be achievable without a decode/re-encode step.

    I tried using Python with ffmpeg pipes, creating a reader and a writer, but the reader seems too slow for the stream and I lose some packets (which means lower video quality in the saved file).

    Here is my code:

import ffmpeg
import numpy as np

width = 1280
height = 720

# A yuv420p frame is width * height * 3 / 2 bytes (12 bits per pixel).
# Reading width * height * 3 (the rgb24 frame size) misaligns the reads.
frame_size = width * height * 3 // 2

# Reader: decode the RTSP stream to raw yuv420p frames on stdout.
process1 = (
    ffmpeg
    .input('rtsp://.....', rtsp_transport='udp', r='10', t="00:00:30")
    .output('pipe:', format='rawvideo', pix_fmt='yuv420p')
    .run_async(pipe_stdout=True)
)

# Writer: encode the raw frames arriving on stdin into an AVI file.
process2 = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='yuv420p', s='{}x{}'.format(width, height))
    .output("prova-02-11-2020.avi", pix_fmt='yuv420p', r='10')
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

while True:
    in_bytes = process1.stdout.read(frame_size)
    if len(in_bytes) < frame_size:
        break
    # frombuffer already yields uint8, so no astype() round-trip is needed.
    in_frame = np.frombuffer(in_bytes, np.uint8)

    # In the future I will push in_frame onto a queue here.
    out_frame = in_frame

    process2.stdin.write(out_frame.tobytes())

process2.stdin.close()
process1.wait()
process2.wait()

    If I run

    ffmpeg -i rtsp://... -acodec copy -vcodec copy -t "00:00:30" out.avi

    it looks like everything is done in a quick/smart way without losing any packets; with -vcodec copy/-acodec copy there is no decoding or re-encoding at all.
    My dream is to do the same in Python for the surveillance camera, while integrating it with the code that analyses the stream.

    I would like the flow that creates the file not to require decoding + encoding. The last 10 seconds of frames would sit in a queue and, on a specific event, the contents of the queue plus the next 30 seconds of frames would be saved into an AVI file.

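    One way to get this behaviour with stream copy only, sketched here with assumed paths, URL, and buffer sizes: let ffmpeg keep a ring buffer of 1-second MPEG-TS segments, then stitch the relevant segments together when motion fires.

import subprocess
import time
from pathlib import Path

buf_dir = Path("ring")
buf_dir.mkdir(exist_ok=True)

# Ring buffer: 60 one-second segments, continuously overwritten,
# recorded with stream copy (no decoding, no encoding).
recorder = subprocess.Popen([
    "ffmpeg", "-rtsp_transport", "udp", "-i", "rtsp://camera/stream",
    "-c", "copy",
    "-f", "segment", "-segment_time", "1",
    "-segment_wrap", "60",
    "-reset_timestamps", "1",
    str(buf_dir / "seg%03d.ts"),
])

def on_motion():
    time.sleep(30)                      # let the next 30 seconds be recorded
    segs = sorted(buf_dir.glob("seg*.ts"), key=lambda p: p.stat().st_mtime)
    with open("list.txt", "w") as f:
        f.writelines(f"file '{p.resolve()}'\n" for p in segs[-40:])
    subprocess.run([
        "ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "list.txt",
        "-c", "copy", "event.avi",      # still no re-encoding
    ], check=True)
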
    I have the constraint of running real-time motion detection on the live stream.

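    And a crude frame-difference motion detector that could run on the raw yuv420p frames already being read in the loop above; the threshold value is an assumption to be tuned per camera and lighting.

import numpy as np

# Mean absolute difference between consecutive luma planes; in a yuv420p
# frame the first width * height bytes are the Y plane.
def has_motion(prev_frame, frame, width=1280, height=720, threshold=4.0):
    luma = width * height
    prev_y = prev_frame[:luma].astype(np.int16)
    cur_y = frame[:luma].astype(np.int16)
    return float(np.abs(cur_y - prev_y).mean()) > threshold
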
    Do you have any comments or suggestions?

  • Is there a way to cut movement "dead air" on a screen recording? [closed]

    16 May 2023, by Raelbe

    I have got a couple of screen recordings of a painting I've done, and I've managed to concat the files together.

    Unfortunately, there is a lot of "dead air" in the video (where I have left my desk, so there is no movement happening on screen). Is there a way to cut out this downtime?

    I found an example that another artist uses for his screen recordings, so I plugged it in with my file directories. This is what I used:

    .\ffmpeg -f concat -safe 0 -i "merge.txt" -vf npdecimate=hi=64*12:lo=64*5:frac=0.33,seipts=N/30/TB,"setpts=0.25*PTS" -r 30 -crf 30 -an Illu_Test.mp4

    I got this error message at the end:

    [AVFilterGraph @ 000001cadfe5b1c0] No option name near 'N/30/TB'
[AVFilterGraph @ 000001cadfe5b1c0] Error parsing a filter description around: ,setpts=0.25*PTS
[AVFilterGraph @ 000001cadfe5b1c0] Error parsing filterchain 'npdecimate=hi=64*12:lo=64*5:frac=0.33,seipts=N/30/TB,setpts=0.25*PTS' around: ,setpts=0.25*PTS
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #0:0

    So I chopped it up a bit; this is what I used to concat the files, and it worked perfectly.

    .\ffmpeg -f concat -safe 0 -i "merge.txt" -crf 30 -an Illu_Test.mp4

    Now I'm looking to cut out the seconds with no movement. I'm unsure what the -crf option does (as stated, I am brand new to this). The OG artist states that:

    "This is the tolerance level that determines whether there has been enough change between frames or not to be considered as detected motion."

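    For what it's worth, the parse errors above point at two apparent typos: npdecimate should be mpdecimate (the filter that drops near-duplicate frames; its hi/lo/frac values are the "tolerance" the quoted explanation describes, while -crf is only the x264 quality/size trade-off) and seipts should be setpts. A hedged sketch of the corrected invocation, keeping the original parameters:

import subprocess

# mpdecimate drops frames that barely differ from the previous one;
# setpts=N/30/TB then rebuilds timestamps so the surviving frames
# play back-to-back at 30 fps instead of leaving gaps.
subprocess.run([
    "ffmpeg", "-f", "concat", "-safe", "0", "-i", "merge.txt",
    "-vf", "mpdecimate=hi=64*12:lo=64*5:frac=0.33,setpts=N/30/TB",
    "-r", "30", "-crf", "30", "-an", "Illu_Test.mp4",
], check=True)

    If the 4x speed-up is still wanted, the original setpts=0.25*PTS can be chained after the setpts=N/30/TB term.
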
    Any help would be appreciated.

    (Apologies if the format of this question is wrong)