Other articles (69)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution    Version name            Version number
        Debian          Squeeze                 6.x.x
        Debian          Wheezy                  7.x.x
        Debian          Jessie                  8.x.x
        Ubuntu          The Precise Pangolin    12.04 LTS
        Ubuntu          The Trusty Tahr         14.04

    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above or send us the necessary fixes to add (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and audio both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The HTML5 player has been created specifically for MediaSPIP and can easily be adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (6106)

  • Need help configuring FFMPEG to work with a webcam's h264 stream

    9 August 2020, by The Welsh Dragon

    I have been trying to get an H264 stream from an H264 USB webcam working, but I am not making much progress, so I'm hoping someone knows FFmpeg better than me!

    


    There are dozens of questions/answers on SO but none solve my problem.

    


    In short, I get a very pixelated (or sometimes mostly green) picture. I am using VLC to test the stream, which comes via an RTSP server, and FFmpeg to copy the webcam stream to that local RTSP server.

    


    The webcam also supports YUYV, which I can get working; it is just the h264 stream that is causing me problems.

    


    So this is how the device is presented:

    


    H264 USB Camera: USB Camera (usb-20980000.usb-1):
        /dev/video0
        /dev/video1
        /dev/video2
        /dev/video3


    


    /dev/video0 is the YUYV and MPEG stream.
    /dev/video2 is the h264 stream, which has the following capabilities:

    


    ioctl: VIDIOC_ENUM_FMT
        Type: Video Capture

        [0]: 'H264' (H.264, compressed)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 1280x720
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 800x600
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 640x480
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 640x360
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 352x288
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 320x240
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
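
    For reference, a format listing in this shape is typically produced by v4l2-ctl, so the capabilities above can presumably be reproduced with something like:

    v4l2-ctl -d /dev/video2 --list-formats-ext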


    


    I have tried various resolutions; the smaller ones give slightly less pixelated images, but none are usable and they definitely don't compare to the high-resolution YUYV results.

    


    This (YUYV) command works:

    


    ffmpeg -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -c:v h264_omx -r 10 -b:v 2M -an -f rtsp rtsp://localhost:80/live/stream


    


    These two h264 commands don't work:

    


    ffmpeg -input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0 -c:v copy -an -f rtsp rtsp://localhost:80/live/stream


    


    ffmpeg -re -i /dev/video2 -video_size 800x600 -framerate 15 -pix_fmt yuv420p -tune zerolatency -c:v copy -an -f rtsp rtsp://localhost:80/live/stream


    


    For that last command, the FFmpeg output looks like this:

    


    ffmpeg version git-2020-08-07-6fdf3cc Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 8 (Raspbian 8.3.0-6+rpi1)
  configuration: --extra-ldflags=-latomic --arch=armel --target-os=linux --enable-gpl --enable-omx --enable-omx-rpi --enable-nonfree --enable-libfreetype --enable-libx264 --enable-libmp3lame --enable-mmal --enable-indev=alsa --enable-outdev=alsa
  libavutil      56. 58.100 / 56. 58.100
  libavcodec     58.100.100 / 58.100.100
  libavformat    58. 50.100 / 58. 50.100
  libavdevice    58. 11.101 / 58. 11.101
  libavfilter     7. 87.100 /  7. 87.100
  libswscale      5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
Input #0, video4linux2,v4l2, from '/dev/video2':
  Duration: N/A, start: 1353.265049, bitrate: N/A
    Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 30 fps, 30 tbr, 1000k tbn, 2000k tbc
[udp @ 0x38c29f0] attempted to set receive buffer to size 393216 but it only ended up set as 360448
[udp @ 0x38d7b50] attempted to set receive buffer to size 393216 but it only ended up set as 360448
Output #0, rtsp, to 'rtsp://localhost:80/live/stream':
  Metadata:
    encoder         : Lavf58.50.100
    Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, q=2-31, 30 fps, 30 tbr, 90k tbn, 1000k tbc
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[rtsp @ 0x38fd890] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[rtsp @ 0x38fd890] Non-monotonous DTS in output stream 0:0; previous: 0, current: 0; changing to 1. This may result in incorrect timestamps in the output file.
frame=  348 fps= 18 q=-1.0 size=N/A time=00:00:21.03 bitrate=N/A speed=1.09x


    


    The issue looks like it is bandwidth related, or down to a lack of processing power in the device being used, BUT YUYV works at high resolution, and (taking a completely different approach, i.e. not using FFmpeg) I can get a very decent MPEG stream working on the same device.

    


    So, are there any FFmpeg experts out there who can help me get the correct parameters for an h264 stream?
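
    One thing worth noting, which the question itself does not spell out: in both failing commands the v4l2 input options either point at /dev/video0 (the YUYV/MPEG node) or come after -i, and the log further up shows the input is still opened at 1920x1080, 30 fps. A minimal sketch of a stream copy from the H264 node, with every input option placed before -i, might look like the following; whether it cures the pixelation still depends on the camera's bitstream and the available bandwidth.

    ffmpeg -f video4linux2 -input_format h264 -video_size 1920x1080 -framerate 30 \
           -i /dev/video2 -c:v copy -an -f rtsp rtsp://localhost:80/live/stream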

    


  • ffmpeg: stream copy from .mxf into NLE-compatible format

    9 June 2013, by David

    Because my NLE software does not support the .mxf files from the Canon XF100, I need to convert them into a supported format.

    As far as I know, .mxf files are just another container format for mpeg2 streams, so it would be really nice to extract the streams and place them into another container (without re-encoding).

    I think ffmpeg can do this – correct me if I'm wrong – by running the following command:

    ffmpeg -i in.mxf -vcodec copy out.m2ts (or .ts, .mts, ...)

    ffmpeg finishes without errors after about 2 seconds (in.mxf is about 170 MB):

    c:\video>c:\ffmpeg\bin\ffmpeg -i in.MXF -vcodec copy out.m2ts
    ffmpeg version N-53680-g0ab9362 Copyright (c) 2000-2013 the FFmpeg developers
     built on May 30 2013 12:14:03 with gcc 4.7.3 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-av
    isynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enab
    le-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetyp
    e --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --ena
    ble-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-l
    ibopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libsp
    eex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-
    amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --
    enable-libxvid --enable-zlib
     libavutil      52. 34.100 / 52. 34.100
     libavcodec     55. 12.102 / 55. 12.102
     libavformat    55.  8.100 / 55.  8.100
     libavdevice    55.  2.100 / 55.  2.100
     libavfilter     3. 73.100 /  3. 73.100
     libswscale      2.  3.100 /  2.  3.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  3.100 / 52.  3.100
    Guessed Channel Layout for  Input Stream #0.1 : mono
    Guessed Channel Layout for  Input Stream #0.2 : mono
    Input #0, mxf, from 'in.MXF':
     Metadata:
       uid             : 1bb23c97-6205-4800-80a2-e00002244ba7
       generation_uid  : 1bb23c97-6205-4800-8122-e00002244ba7
       company_name    : CANON
       product_name    : XF100
       product_version : 1.00
       product_uid     : 060e2b34-0401-010d-0e15-005658460100
       modification_date: 2013-01-06 11:05:02
       timecode        : 01:42:14:22
     Duration: 00:00:28.32, start: 0.000000, bitrate: 51811 kb/s
       Stream #0:0: Video: mpeg2video (4:2:2), yuv422p, 1920x1080 [SAR 1:1 DAR 16:9
    ], 25 fps, 25 tbr, 25 tbn, 50 tbc
       Stream #0:1: Audio: pcm_s16le, 48000 Hz, mono, s16, 768 kb/s
       Stream #0:2: Audio: pcm_s16le, 48000 Hz, mono, s16, 768 kb/s
    Output #0, mpegts, to 'out.m2ts':
     Metadata:
       uid             : 1bb23c97-6205-4800-80a2-e00002244ba7
       generation_uid  : 1bb23c97-6205-4800-8122-e00002244ba7
       company_name    : CANON
       product_name    : XF100
       product_version : 1.00
       product_uid     : 060e2b34-0401-010d-0e15-005658460100
       modification_date: 2013-01-06 11:05:02
       timecode        : 01:42:14:22
       encoder         : Lavf55.8.100
       Stream #0:0: Video: mpeg2video, yuv422p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-3
    1, 25 fps, 90k tbn, 25 tbc
       Stream #0:1: Audio: mp2, 48000 Hz, mono, s16, 128 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
     Stream #0:1 -> #0:1 (pcm_s16le -> mp2)
    Press [q] to stop, [?] for help
    frame=  532 fps=0.0 q=-1.0 size=  143511kB time=00:00:21.25 bitrate=55314.1kbits
    frame=  561 fps=435 q=-1.0 size=  151254kB time=00:00:22.42 bitrate=55242.0kbits
    frame=  586 fps=314 q=-1.0 size=  158021kB time=00:00:23.41 bitrate=55288.0kbits
    frame=  609 fps=255 q=-1.0 size=  164182kB time=00:00:24.34 bitrate=55235.4kbits
    frame=  636 fps=217 q=-1.0 size=  171463kB time=00:00:25.42 bitrate=55235.1kbits
    frame=  669 fps=194 q=-1.0 size=  180133kB time=00:00:26.72 bitrate=55226.3kbits
    frame=  699 fps=173 q=-1.0 size=  188326kB time=00:00:27.92 bitrate=55256.6kbits
    frame=  708 fps=169 q=-1.0 Lsize=  190877kB time=00:00:28.30 bitrate=55233.6kbit
    s/s
    video:172852kB audio:442kB subtitle:0 global headers:0kB muxing overhead 10.1461
    18%

    Unfortunately, the output file is displayed correctly only by VLC.
    My NLE software (CyberLink PowerDirector) is able to open the file, but most of the picture is green. Only a few pixels on the left edge show the original video:

    (screenshot: output file)

    Any ideas how to solve that problem? Is there a better way to use .mxf files in NLE software without native support?

    thanks in advance
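
    One detail visible in the log above, though the question does not call it out: with only -vcodec copy, ffmpeg's default mapping keeps a single audio stream and re-encodes it to mp2 for the mpegts muxer. A sketch that keeps every stream untouched and remuxes into a container many NLEs accept, assuming the NLE can read MPEG-2 video inside MOV, might be:

    ffmpeg -i in.MXF -map 0 -c:v copy -c:a copy out.mov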

  • How to batch process a series of video files with PowerShell and other-transcode/ffmpeg

    7 June 2022, by DarkDiamond

    TL;DR

    


    What did I do wrong in the following PowerShell script? It does not work as expected.

    



    


    I am recording some of my lectures at my university with a photo camera. This works pretty well, although I have to split a single lecture into three or four parts because the camera can only record 29 minutes of video in one take. I know this is a common limitation related to licensing: most photo cameras simply don't have the right license to record longer videos. But it leaves me with the problem that I later have to join the files together after doing some post-processing on them.

    


    With the camera I produce up to four video files of around 3.5 GB each, which is far too big to be of any use, because our IT department understandably doesn't want to host that much data; I produce around 22 GB of video material each week.

    


    Some time ago I came across a very useful tool called "other-video-transcoding" by Don Melton over on GitHub, written in Ruby, which allows me to compress the files to a reasonable size without any visible loss. In addition, I crop the videos to remove the parts of each frame that show neither the board nor the area where my professor stands, which decreases the file size even further and adds some privacy protection by cutting out most of the students.

    


    As the tools are accessible via the command line, they are relatively easy to configure and don't spend extra computational power rendering a GUI, so I can edit one of the 29-minute clips in less than 10 minutes.

    


    Now I want to optimize my workflow by writing a PowerShell script that only takes as parameters what to crop and which files to work on, and then does the rest on its own, so I can just start the script and do something else while my laptop renders the new files.

    


    So far I have the following:

    


    $video_path = Get-ChildItem ..\ -Directory | findstr "SoSe"

Get-ChildItem $video_path -name | findstr ".MP4" | Out-File temp.txt -Append 
Get-Content temp.txt | ForEach-Object {"file " + $_} >> .\files.txt

Get-ChildItem $video_path |
Foreach-Object {
other-transcode --hevc --mp4 --target 3000 --crop 1920:780:0:0 $_.FullName
}

#other-transcode --hevc --mp4 --crop 1920:720:60:0 ..\SoSe22_Theo1_videos_v14_RAW\
ffmpeg -f concat -i files.txt -c copy merged.mp4
Remove-Item .\temp.txt


    


    but it does not quite do what I expect it to do. This is my file system:

    


    sciebo/
└── SoSe22_Theo1_videos/
    ├── SoSe22_Theo1_videos_v16/
    │   ├── SoSe22_Theo1_videos_v16_KOMPR/
    │   │   ├── C0001.mp4
    │   │   ├── C0002.mp4
    │   │   ├── C0003.mp4
    │   │   ├── C0004.mp4
    │   │   ├── temp.txt
    │   │   ├── files.txt
    │   │   └── merged.mp4
    │   └── SoSe22_Theo1_videos_v16_RAW/
    │       ├── C0001.mp4
    │       ├── C0002.mp4
    │       ├── C0003.mp4
    │       └── C0004.mp4
    └── SoSe22_Theo1_videos_v17/
        ├── SoSe22_Theo1_videos_v17_KOMPR
        └── SoSe22_Theo1_videos_v17_RAW/
            ├── C0006.mp4
            ├── C0007.mp4
            ├── C0008.mp4
            └── C0009.mp4


    


    where the 16th lecture has already been processed and the 17th has not. I always keep the raw video data in the folders ending in RAW and the edited/compressed output files in the ones ending in KOMPR. Note that the video files in the KOMPR folder are the output files of the other-transcode tool.

    


    The real work happens in the line where it says

    


    other-transcode --hevc --mp4 --target 3000 --crop 1920:780:0:0 $_.FullName


    


    and in the line

    


    ffmpeg -f concat -i files.txt -c copy merged.mp4


    


    where I concat the output files into the final version I can upload to our online learning platform. What is wrong with my script? In the end I'd like to pass the --crop parameter to my script itself, but that is not the primary problem.

    



    


    A little information on the transcoding tool, so you don't have to look at its documentation: as its last argument it takes the location of the video files to work on, as relative or absolute file paths. The output is placed in the folder the tool is called from, so if I cd into one of the KOMPR directories and then call

    


    other-transcode --mp4 ../SoSe22_Theo1_videos_v16_RAW/C0001.mp4


    


    a new file C0001.mp4 is created in the KOMPR directory, containing the transcoded video and the original audio.
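
    A possible restructuring of the script, offered only as a sketch: it assumes the RAW/KOMPR layout shown above and the other-transcode behaviour described here (output lands in the directory the tool is called from); the folder name and the default --crop value are illustrative. The main changes are that the concat list is built from the transcoded files rather than the raw ones, and that the crop rectangle is passed in as a parameter.

    # sketch.ps1 - run from inside the matching *_KOMPR directory (illustrative paths)
    param(
        [string]$RawDir = "..\SoSe22_Theo1_videos_v17_RAW",   # folder with the camera files
        [string]$Crop   = "1920:780:0:0"                      # crop rectangle handed to other-transcode
    )

    # Transcode every raw clip; other-transcode writes its output into the current folder
    Get-ChildItem $RawDir -Filter "*.MP4" | Sort-Object Name | ForEach-Object {
        other-transcode --hevc --mp4 --target 3000 --crop $Crop $_.FullName
    }

    # Build the concat list from the transcoded files, not the raw ones
    Get-ChildItem . -Filter "*.mp4" |
        Where-Object { $_.Name -ne "merged.mp4" } |
        Sort-Object Name |
        ForEach-Object { "file '$($_.Name)'" } |
        Set-Content files.txt

    # Join the parts without re-encoding
    ffmpeg -f concat -i files.txt -c copy merged.mp4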