Other articles (87)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Customizing categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Texte
    This form can be modified in:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide
    It is also in this configuration section that you can specify the (...)

  • MediaSPIP Player: potential problems

    22 February 2011, by

    The player does not work in Internet Explorer
    On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate.
    If the configuration of this Apache module contains a line resembling the following, try removing it or commenting it out to see whether the player works correctly: (...)

On other sites (11864)

  • Use FFMPEG to Save Live CCTV Video Streams that Have a Wrong FPS Encoded, Are Published as Video Clips instead of Frames, and Have Non-negligible Frame Loss

    6 March 2023, by Crear

    I want to use the FFMPEG command line to archive a live CCTV video stream (no audio) from Newark Citizen Virtual Patrol (https://cvp.newarkpublicsafety.org) for traffic analysis. Previously I was using (I'm just a noob with these commands)

    os.system('ffmpeg -t 24:00:00 -i '+address+' -hide_banner -c:v copy -s 640x360 -segment_time 00:15:00 -f segment -strftime 1 -reset_timestamps 1 "'+OutPath+camera_location+'_%Y-%m-%d-%H-%M-%S.mp4"')

    to archive the videos every day and segment them into 15-minute-long videos.

    However, there are several issues.

    1. The FPS read from the video stream is lower than the real one. For example, the real frame rate is 12, but the decoded result says 8, so every time it generates a 15-minute-long video, that video only covers 10 or 11 minutes of real-world time.
    2. Due to unstable frame loss, the FPS is not a stable value either. Therefore, when I manually set the FPS, the video usually ends up with the wrong length, and sometimes when the stream freezes it keeps waiting because the 15-minute-long video is not finished yet. Something I noticed is that it may generate a 15-minute-long video that contains both night and day, starting at perhaps 2 AM but ending at 8 AM.
    3. The live CCTV video stream is delivered not frame by frame but video clip by video clip. Therefore, when I set -use_wallclock_as_timestamps to 1, the video plays the short clip ultra-fast, then stays frozen for the rest of the time until the next video clip is received.

    The only thing I can think of is to redistribute the frames evenly between the timestamp at which the current video clip was received and the timestamp at which the prior video clip was received. What parameters can help FFMPEG fix the FPS and archive correctly? I am using FFMPEG to save the video instead of using OpenCV to decode the frames and then encode a video, because we have a huge number of cameras and our legacy Xeon processor has trouble encoding that many frames simultaneously.
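
    To illustrate the "redistribute the frames evenly" idea, here is a minimal sketch (assuming the true rate really is 12 fps; <stream_address>, <out_path> and <camera_location> are placeholders). It rewrites every timestamp with setpts, which requires re-encoding and therefore works against the CPU constraint above, so treat it as an experiment on a single camera rather than a drop-in fix:

    ffmpeg -i <stream_address> -hide_banner -an \
           -vf "setpts=N/(12*TB)" -r 12 -c:v libx264 -preset veryfast \
           -f segment -segment_time 00:15:00 -strftime 1 -reset_timestamps 1 \
           "<out_path><camera_location>_%Y-%m-%d-%H-%M-%S.mp4"

    setpts=N/(12*TB) stamps each decoded frame exactly 1/12 s apart regardless of when it arrived, so every segment spans 15 minutes of frames, but it does nothing about frames that were already lost upstream.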

    Any suggestion is appreciated!

  • libvpx 0.9.1 and FFmpeg 0.6

    18 June 2010, by Multimedia Mike — VP8

    Great news: Hot on the heels of FFmpeg’s 0.6 release, the WebM project released version 0.9.1 of their libvpx. I can finally obsolete my last set of instructions on getting FFmpeg-svn working with libvpx 0.9.

    Building libvpx 0.9.1
    Do this to build libvpx 0.9.1 on Unix-like systems:
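    A minimal sketch of the usual sequence (run from the unpacked libvpx-0.9.1 source directory; sudo is only needed for a system-wide install):

    ./configure
    make
    sudo make install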

    libvpx’s build system has been firmed up a bit since version 0.9. It’s now smart enough to install when said target is invoked and it also builds the assembly language optimizations. Be advised that on 32- and 64-bit x86 machines, Yasm must be present (install either from source or through your package manager).

    Building FFmpeg 0.6
    To build the newly-released FFmpeg 0.6:

    • Install Vorbis through your package manager if you care to encode WebM files with audio; e.g., ’libvorbis-dev’ is the package you want on Ubuntu
    • Download FFmpeg 0.6 from the project’s download page
    • Configure FFmpeg with at least these options: ./configure --enable-libvpx --enable-libvorbis --enable-pthreads; the final link step still seems to fail on Linux if the pthreads option is disabled
    • ’make’

    Verifying
    Check this out:

    $ ./ffmpeg -formats 2> /dev/null | grep WebM
      E webm            WebM file format
    

    $ ./ffmpeg -codecs 2> /dev/null | grep libvpx
    DEV libvpx libvpx VP8

    That means that this FFmpeg binary can mux a WebM file and can both decode and encode VP8 video via libvpx. If you’re wondering why the WebM format does not list a ’D’ indicating the ability to demux a WebM file, that’s because demuxing WebM is handled by the general Matroska demuxer.
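
    To double-check on a given build, something along these lines should show the combined Matroska/WebM demuxer carrying the ’D’ flag:

    $ ./ffmpeg -formats 2> /dev/null | grep -i matroska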

    Doing Work
    Encode a WebM file:

    ffmpeg -i <input_file> <output_file.webm>

    FFmpeg just does the right thing when it sees that .webm extension on the output file. It’s almost magical.
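
    If you would rather not rely on the extension, a rough equivalent that names the codecs explicitly (using the option spellings of this era) is:

    ffmpeg -i <input_file> -vcodec libvpx -acodec libvorbis <output_file.webm>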

    For instant gratification that the encoded file is valid, you can view it immediately using ’ffplay’, if that binary was built (done by default if the right support libraries are present). If ffplay is not present, you can always execute this command line to see some decode operation:

    ffmpeg -i <output_file.webm> -f framecrc -

  • How to concatenate two MP4 files, which require http basic Authorization: Bearer, using ffmpeg?

    8 July 2023, by Jeff Strongman

    Hello dear ffmpeg experts! 🧠 🎯

    I ran the following command, which worked perfectly:

    ffmpeg -protocol_whitelist https,concat,tls,tcp -i "concat:https://dash.akamaized.net/akamai/bbb_30fps/bbb_30fps_1280x720_4000k/bbb_30fps_1280x720_4000k_0.m4v|https://dash.akamaized.net/akamai/bbb_30fps/bbb_30fps_1280x720_4000k/bbb_30fps_1280x720_4000k_1.m4v|https://dash.akamaized.net/akamai/bbb_30fps/bbb_30fps_1280x720_4000k/bbb_30fps_1280x720_4000k_2.m4v" -c:v copy -vframes 180 -y Movie_of_6_seconds.mp4

    I followed the recommended solution from the following post:
    How to concatenate two MP4 files using FFmpeg?

    You can execute the command on your local computer and see that it should run just fine...

    I used option 3, the concat protocol, which does indeed concatenate the init + progressive segments.

    However... when every segment on the server I refer to is password protected, it fails with 401 Unauthorized, even though I added the following option before specifying the -i "concat:..." input:
    -headers "Authorization: Bearer base64user:password"

    It seems to me... that the headers are not passed down to the concat protocol inside ffmpeg's input, and it simply ignores them. When I used the same -headers option on a single file, without concat, the authorization passed successfully.

    Notes:

    • Even though every segment has a length of 120 frames (so at most I could have generated 2*120 = 240 frames), I wanted a movie of 6 seconds and not 8... and, in this way, to test that ffmpeg is smart enough to stop processing the whole input. To do that, I used -vframes 180, where 180 / 30 (FPS) = 6 seconds.
    • I used -c:v copy to get only the video part, without re-encoding (no audio!).
    • I used -y to overwrite an existing output file...
    • 0.m4v is the init file! It is a small file that holds the metadata of the original video, which was produced with MPEG-DASH.
    • 1.m4v and 2.m4v are the progressive segments.

    Is there a way to pass the http basic headers (Authorization: Bearer) to all of the chained files?

    Like:

    • Via a JSON content type on the ffmpeg request
    • Or user:password@video_segment (although... it seems to me that's not a header?)
    • Somehow specifying the header inside the concat command?

    I don't want to first download all the files and then get rid of the password protection... as it takes both ridiculous time and other resources... and I would like to record from a segment stream that is "endless", meaning a camera that keeps streaming data.
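
    For what it's worth, one workaround sketch (unverified; <token>, <server> and the segment names are placeholders) would be to let a client that does send the header fetch the segments and pipe them into ffmpeg's stdin, so nothing has to be saved to disk first:

    curl -s -H "Authorization: Bearer <token>" \
         "https://<server>/0.m4v" "https://<server>/1.m4v" "https://<server>/2.m4v" \
      | ffmpeg -i - -c:v copy -vframes 180 -y Movie_of_6_seconds.mp4

    curl writes each response body to stdout in order, which mimics what the concat protocol does, and fragmented MP4 (init + segments) should remain readable from a pipe.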

    Thanks in advance 🙏🏻,

    FFmpeg noobie 🙈