Media (91)

Other articles (86)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Organising by category

    17 May 2013

    In MediaSPIP, a section has two names: "category" and "rubrique".
    The documents stored in MediaSPIP can be filed under different categories. A category can be created by clicking "publier une catégorie" ("publish a category") in the publish menu at the top right (after logging in). A category can also be placed inside another category, so a whole tree of categories can be built.
    The next time a document is published, the newly created category will be offered (...)

On other sites (8912)

  • Streaming without Content-Length in response

    21 December 2023, by kain

    I'm using Node.js, Express (and connect), and fluent-ffmpeg.

    We want to stream audio files stored on Amazon S3 over HTTP.

    We have it all working, except that we would like to add one feature: on-the-fly conversion of the stream through ffmpeg.

    This works well; the problem is that some browsers check the file in advance before actually fetching it.

    Incoming requests that contain a Range header, which we answer with a 206 and all the information from S3, have a fundamental problem: we need to know the Content-Length of the file in advance.

    We don't know it, since the audio is going through ffmpeg.

    One solution might be to write the resulting Content-Length to S3 when the file is stored (in a custom header), but that means going through the pain of running encoding queues after upload just to know the size for future requests. It also means that if we change the compressor or preset we have to do it all over again, so it is not a viable solution.

    We also noticed big differences in the way Chrome and Safari request the audio tag's src, but that may be a discussion for another topic.

    The fact is that without a proper Content-Length header in the response everything seems to break: browsers go into an infinite loop or restart the stream at will.

    Ideas?
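
    A minimal sketch of the range-less, chunked approach, assuming fluent-ffmpeg and Express roughly as described above; the route, local file path and MP3 output are placeholders, and in the real setup the input would be the S3 object stream rather than a local file. If no Content-Length is set, Node falls back to Transfer-Encoding: chunked, and Accept-Ranges: none tells clients not to expect 206 replies.

    const express = require('express');
    const ffmpeg = require('fluent-ffmpeg');

    const app = express();

    app.get('/audio/:id', (req, res) => {
      // The length of the transcoded output is unknown, so no Content-Length
      // is set; Node then uses chunked transfer encoding automatically.
      res.status(200);
      res.setHeader('Content-Type', 'audio/mpeg');
      res.setHeader('Accept-Ranges', 'none');   // byte ranges are not supported

      // Placeholder input; in the real setup this would be the S3 read stream.
      ffmpeg('/tmp/' + req.params.id + '.wav')
        .audioCodec('libmp3lame')               // assumes an ffmpeg build with libmp3lame
        .format('mp3')
        .on('error', (err) => {
          // Headers are already sent once encoding starts, so just end the response.
          console.error('ffmpeg error:', err.message);
          res.end();
        })
        .pipe(res, { end: true });              // pipe ffmpeg's stdout into the response
    });

    app.listen(3000);

    Whether every browser will play a chunked, range-less stream in an audio tag is a separate question (Safari in particular tends to insist on range support), so this sketch only covers the server side.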

    


  • Video to image sequence export (preserving color range)

    20 August 2017, by Floating Point

    When I convert material that is in legal (tv) range to ProRes, the legal range is preserved, but when I export it to an image sequence (TIFF) I get files that are already in full range, even though I try to preserve the original values with -vf "zscale=rin=limited:r=limited".

    Does ffmpeg automatically change the color range when exporting to images?
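
    As a hedged diagnostic (file names here are placeholders), ffprobe can show the pixel format and the range actually tagged on both the source and an exported frame; if the TIFF frames come out in an RGB pixel format, a full-range result is expected, since ffmpeg treats RGB as full range:

    ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt,color_range -of default=noprint_wrappers=1 input.mov
    ffprobe -v error -show_entries stream=pix_fmt,color_range -of default=noprint_wrappers=1 frame_0001.tiff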

  • Video concatenation using ffmpeg. Output video length is longer than the combined input length

    28 December 2022, by Rostyslav Mosorov

    I have a problem with MP4 video concatenation using ffmpeg. I have two copies of the same video and I want to concatenate them. I created a file inputs.txt and then used this command:

    


    ffmpeg  -f concat -i inputs.txt -c copy mergedVideo.mp4

    


    Output:

    


    ffmpeg  -f concat -i inputs.txt -c copy mergedVideo.mp4
ffmpeg version 5.0.1 Copyright (c) 2000-2022 the FFmpeg developers
  built with clang version 12.0.1
  configuration: --prefix=/Users/runner/miniforge3/conda-bld/ffmpeg_1649114094397/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pl --cc=arm64-apple-darwin20.0.0-clang --disable-doc --disable-openssl --enable-demuxer=dash --enable-gnutls --enable-gpl --enable-hardcoded-tables --enable-libfreetype --enable-libopenh264 --enable-neon --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-libvpx --enable-pic --enable-pthreads --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libmp3lame --pkg-config=/Users/runner/miniforge3/conda-bld/ffmpeg_1649114094397/_build_env/bin/pkg-config --enable-cross-compile --arch=arm64 --target-os=darwin --cross-prefix=arm64-apple-darwin20.0.0- --host-cc=/Users/runner/miniforge3/conda-bld/ffmpeg_1649114094397/_build_env/bin/x86_64-apple-darwin13.4.0-clang
  libavutil      57. 17.100 / 57. 17.100
  libavcodec     59. 18.100 / 59. 18.100
  libavformat    59. 16.100 / 59. 16.100
  libavdevice    59.  4.100 / 59.  4.100
  libavfilter     8. 24.100 /  8. 24.100
  libswscale      6.  4.100 /  6.  4.100
  libswresample   4.  3.100 /  4.  3.100
  libpostproc    56.  3.100 / 56.  3.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x1561044a0] Auto-inserting h264_mp4toannexb bitstream filter
Input #0, concat, from 'inputs.txt':
  Duration: N/A, start: -0.064000, bitrate: 1424 kb/s
  Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 1355 kb/s, 50 fps, 50 tbr, 12800 tbn
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
  Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 16000 Hz, mono, fltp, 68 kb/s
    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]
File 'mergedVideo.mp4' already exists. Overwrite? [y/N] y
Output #0, mp4, to 'mergedVideo.mp4':
  Metadata:
    encoder         : Lavf59.16.100
  Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 1355 kb/s, 50 fps, 50 tbr, 12800 tbn
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
  Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 16000 Hz, mono, fltp, 68 kb/s
    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame=    0 fps=0.0 q=-1.0 size=       0kB time=00:00:00.06 bitrate=   6.0k[mov,mp4,m4a,3gp,3g2,mj2 @ 0x154f28b20] Auto-inserting h264_mp4toannexb bitstream filter
frame=  166 fps=0.0 q=-1.0 Lsize=     583kB time=00:00:03.39 bitrate=1407.4kbits/s speed= 179x    
video:550kB audio:29kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.737394%


    


    Unfortunately, the result is different than I expected (see the "movies comparison" screenshot).

    


    The output is longer than the combined length of the two inputs. Any thoughts?

    


    I also tried different commands:

    


    ffmpeg  -f concat -i inputs.txt -c copy mergedVideo.mp4


    


    or setting the framerate:

    


    ffmpeg -r 50 -f concat -i inputs.txt -c copy mergedVideo.mp4
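
    A variant worth comparing, as a sketch (it re-encodes, and the output name is a placeholder): the concat filter generates fresh timestamps instead of copying the originals, so whether it also ends up at 3.38 s helps narrow down where the extra time comes from:

    ffmpeg -i 0.mp4 -i 0.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" mergedFilter.mp4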


    


    What is also weird is that I have the same problem when using the moviepy library.

    


    from moviepy.editor import VideoFileClip, concatenate_videoclips
videos = [VideoFileClip("0.mp4"), VideoFileClip("0.mp4")]
concatenate_videoclips(videos).write_videofile('mergedVideo.mp4')


    


    Update:
ffprobe command for the output file:

    


    ffprobe -v error -select_streams v -show_entries stream=codec_name,time_base,start_pts,start_time,duration -of default=noprint_wrappers=1 mergedVideo.mp4 
codec_name=h264
time_base=1/12800
start_pts=0
start_time=0.000000
duration=3.380000


    


    For the input file:

    


    ffprobe -v error -select_streams v -show_entries stream=codec_name,time_base,start_pts,start_time,duration -of default=noprint_wrappers=1 0.mp4          
codec_name=h264
time_base=1/12800
start_pts=0
start_time=0.000000
duration=1.660000


    


    The duration is supposed to be 2 * 1.66 = 3.32, but it's 3.38.

    


    Content of inputs.txt:

    


    file '0.mp4'
file '0.mp4'
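
    One check worth adding, since the concat demuxer offsets each entry by the duration of the previous file as a whole rather than of its video stream alone: print the duration of every stream in 0.mp4, not only the video. If the audio stream reports roughly 1.69 s, that alone would account for the total, since 2 × 1.69 ≈ 3.38.

    ffprobe -v error -show_entries stream=index,codec_type,duration -of default=noprint_wrappers=1 0.mp4

    If that turns out to be the case, the concat list itself can pin the offset to the video duration with a duration directive (a hedged sketch of inputs.txt):

    file '0.mp4'
    duration 1.66
    file '0.mp4'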