
Other articles (111)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • Farm deployment option

    12 April 2011, by

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share setup costs between several projects / individuals; to deploy a multitude of unique sites quickly; to avoid having to dump every creation into a digital catch-all, as happens on the big general-public platforms scattered across the (...)

On other sites (4943)

  • Using FFMPEG to add pillar bars

    10 February 2021, by Ewok BBQ

    I have transferred some film to video files from 16mm (native 4:3). The image looks great.

    


    When I scanned them, I scanned to a native 16:9. Since I overscanned, I captured the entire height of the frame, which is what I want, but I also picked up the soundtrack and perforations. I want to crop in to the frame line on the sides as well.

    


    I can CROP the image down with FFMPEG to remove the information outside of the framing I want [-vf crop=1330:1080:00:00].
I know this will result in a non-standard aspect ratio.
This plays fine on a computer (vlc just adapts to the non-standard).

    


    But for standardized delivery, I would love to keep the native 1920x1080 pixels, but just make everything outside of the centered 1330:1080 black.

    


    Is there a way to specifically select where the pillar bars are?

    


    I really want to re-encode the video as little as possible.

    


    In that vein, does anyone have a better tool than -vf crop as well?

    


    Thank you very much.

    

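    A sketch of one possible approach (not from the original post): keep the crop from the question, then pad back onto a 1920x1080 black canvas so the kept region ends up centered. Filenames here are placeholders.

```shell
# Pillar width needed to centre a 1330-px-wide region in a 1920-px frame:
X=$(( (1920 - 1330) / 2 ))
echo "$X"    # prints 295

# Hypothetical invocation (input.mov/output.mov are placeholders); crop is
# the one from the question, pad re-centres it on a 1920x1080 black canvas:
# ffmpeg -i input.mov \
#   -vf "crop=1330:1080:0:0,pad=1920:1080:${X}:0" \
#   -c:a copy output.mov
```

    An alternative that skips the crop is the drawbox filter with t=fill, painting the two side regions black in place. Either way, a pixel-level change like this does require re-encoding the video stream; only cutting or remuxing can avoid that.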

  • ffmpeg: adding a non-muxed stream with correct codec type tagging

    13 January 2021, by Hamish

    In common use, I believe ffmpeg requires inputs to be in a specified muxer format containing one or more data streams that can be decoded by a codec the format's demuxer supports. I have a data stream (not audio or video) which is already encoded with a codec but is not muxed. How can I get this stream into the ffmpeg pipeline with the correct codec type assigned, so that the muxer knows what to do with it?

    


    I have tried streaming the data over UDP and specifying the data demuxer. With some combinations I can get it to say it's streaming, but I can never get a player to connect, presumably because the mpegts output is either null or invalid. Command line:

    


    ffmpeg -v verbose ^
-f flv -listen 1 -i rtmp://127.0.0.1:1101 ^
-f data -i udp://127.0.0.1:1300 ^
    -map 0:v -vcodec mpeg2video -map 1:d -f mpegts -mpegts_m2ts_mode 1  udp://localhost:1200


    


    Result (partial):

    


    Input #0, flv, from 'rtmp://127.0.0.1:1101':
  Metadata:
    encoder         : Lavf58.29.100
  Duration: 00:00:00.00, start: 0.000000, bitrate: N/A
    Stream #0:0: Video: h264 (Constrained Baseline), 1 reference frame, yuv420p(progressive, left), 5760x1080 (5760x1088), 30 fps, 30 tbr, 1k tbn, 60 tbc
    Stream #0:1: Audio: mp3, 48000 Hz, stereo, fltp, 128 kb/s
Input #1, data, from 'udp://127.0.0.1:1300':
  Duration: N/A, start: 0.000000, bitrate: N/A
    Stream #1:0: Data: none
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mpeg2video (native))
  Stream #1:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
[h264 @ 00000180d48ae700] Reinit context to 5760x1088, pix_fmt: yuv420p
[graph 0 input from stream 0:0 @ 00000180d489dcc0] w:5760 h:1080 pixfmt:yuv420p tb:1/1000 fr:30/1 sar:0/1
[mpegts @ 00000180d5f64040] Cannot automatically assign PID for stream 1
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:0 --
[AVIOContext @ 00000180d48b83c0] Statistics: 0 seeks, 0 writeouts
[AVIOContext @ 00000180d4882080] Statistics: 185593 bytes read, 0 seeks
[AVIOContext @ 00000180d5f469c0] Statistics: 204 bytes read, 0 seeks
Conversion failed!


    


    The codec type name is klv, which has the tag KLVA. It is only supported by the mpegts and mxf (de)muxers. I presume there must be a way of getting it into the pipeline without already having a valid mpegts or mxf stream, otherwise we have a kind of paradox.

    


    I've tried specifying the codec on the input, but it fails validation, presumably because the data demuxer does not support it.

    


    Somehow mp4 files can be muxed from elementary streams (h264 and aac files), but I guess there must be some special case in the code that forces the codec type based on the file extension.

    


    I would really love to do this from the command line with a public build, but if that is absolutely not possible, I would also welcome some advice on how it could be achieved from C++ code.

    

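    Not an answer from the thread, but one avenue worth noting for the immediate "Cannot automatically assign PID" failure: the mpegts muxer accepts explicit PID assignment via -streamid output_stream_index:pid. Whether the muxer will then accept a stream whose codec is still detected as none is a separate problem, so treat this as an untested sketch of the same pipeline as above:

```shell
# Untested sketch: explicit PID for the data stream.
# -streamid maps OUTPUT stream index -> MPEG-TS PID.
ffmpeg -v verbose \
  -f flv -listen 1 -i rtmp://127.0.0.1:1101 \
  -f data -i udp://127.0.0.1:1300 \
  -map 0:v -c:v mpeg2video \
  -map 1:d -c:d copy \
  -streamid 1:0x102 \
  -f mpegts -mpegts_m2ts_mode 1 udp://localhost:1200
```

    For the C++ route the question mentions, libavformat lets you create the stream yourself with avformat_new_stream() and set codecpar->codec_id to AV_CODEC_ID_SMPTE_KLV before writing the header, which is how the mpegts muxer learns the KLV tagging without a valid mpegts input existing first.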

  • ffmpeg lags when streaming video+audio from RPi Zero W with Logitech C920

    7 January 2021, by Ema

    I've been trying to set up a baby monitor with a Raspberry Pi Zero and a Logitech C920 webcam. It does work with VLC (cvlc), but it lags too much and gets worse over time.

    


    So I am playing around with ffmpeg and I am getting some better results. This is what I've done so far.

    


    First I set the webcam to output h264 1080p natively (the Pi Zero W can't afford to do any transcoding).

    


    v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1


    


    Now, if I stream audio only with

    


    ffmpeg \
-f alsa \
-i hw:1,0 \
-vn \
-flags +global_header \
-acodec aac \
-ac 1 \
-ar 16000 \
-ab 16k \
-f rtp rtp://192.168.0.10:5002 > audio.sdp


    


    it works great and the lag is about 1 second (definitely acceptable).

    


    If I stream video only with

    


    ffmpeg \
-f v4l2 \
-vcodec h264 \
-i /dev/video0 \
-an \
-vcodec copy \
-pix_fmt yuv420p \
-r 30 \
-b:v 512k \
-flags +global_header \
-f rtp rtp://192.168.0.10:5000 > video.sdp


    


    same result, very little lag (for some reason the first -vcodec is necessary to force the webcam to output h264).

    


    However, when I stream both with

    


    ffmpeg \
-f v4l2 \
-vcodec h264 \
-i /dev/video0 \
-f alsa \
-i hw:1,0 \
-an \
-preset ultrafast \
-tune zerolatency \
-vcodec copy \
-pix_fmt yuv420p \
-r 30 \
-b:v 512k \
-flags +global_header \
-f rtp rtp://192.168.0.10:5000 \
-vn \
-flags +global_header \
-acodec aac \
-ac 1 \
-ar 16000 \
-ab 16k \
-f rtp rtp://192.168.0.10:5002 > both.sdp


    


    the lag ramps up to 10 seconds and audio and video are out of sync. Does anybody know why?

    


    I've tried UDP and TCP instead of RTP but then the lag is always high, even with audio/video only.

    


    Any suggestion is much appreciated.

    


    P.S. On the client side (macOS) I'm receiving with

    


    ffplay -protocol_whitelist file,rtp,udp -i file.sdp
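
    A couple of observations on the combined command (not from the post): -preset, -tune, -pix_fmt, -r and -b:v have no effect when -vcodec copy is in force, and combining a v4l2 and an alsa input in one process is a classic spot where input queueing stalls cause drift. Raising -thread_queue_size on each input and mapping the streams explicitly is a common mitigation; an untested sketch:

```shell
# Untested sketch: explicit mapping plus larger input queues.
# -thread_queue_size buffers each input so a momentary stall on one
# device does not block the other.
ffmpeg \
  -thread_queue_size 512 -f v4l2 -vcodec h264 -i /dev/video0 \
  -thread_queue_size 512 -f alsa -i hw:1,0 \
  -map 0:v -vcodec copy -flags +global_header \
  -f rtp rtp://192.168.0.10:5000 \
  -map 1:a -acodec aac -ac 1 -ar 16000 -ab 16k -flags +global_header \
  -f rtp rtp://192.168.0.10:5002 > both.sdp
```

    Since the video is copied rather than transcoded, any remaining drift would point at the aac encoder or at RTP interleaving rather than at the Pi's CPU.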