Other articles (30)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (6497)

  • ffmpeg, missing size: Could not find codec parameters for stream 0 (Video: exr, gbrapf32le)

    13 September 2022, by RobertSmith

    I'm getting an error message using ffmpeg on Windows (a newer build of ffmpeg from 2022) with EXR files. I'm trying to convert the EXR to a video, but I can reproduce the error with just the command below:

    ffmpeg.exe -analyzeduration 9223372036854775807 -probesize 9223372036854775807 -i test_501_030_075_cg_testing_v001.1001.exr

    Below is the error:

    [exr @ 0000026319944c00] Unsupported channel VRayVelocity.Y.
[exr @ 0000026319944c00] Unsupported channel VRayVelocity.Z.
[exr @ 0000026319944c00] Multiple channels with index 2.
[exr @ 0000026319944c00] Wrong or missing size information.
[exr_pipe @ 0000026319932ac0] Could not find codec parameters for stream 0 (Video: exr, gbrapf32le): unspecified size
Consider increasing the value for the 'analyzeduration' (9223372036854775807) and 'probesize' (9223372036854775807) options
Input #0, exr_pipe, from 'X:\Test_501_030_075_cg_testing_v001.1001.exr':
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: exr, gbrapf32le, 25 fps, 25 tbr, 25 tbn

    When converting to a video, I also got:

    2022-09-12 10:39:36:  0: [buffer @ 000001402775ee80] Unable to parse option value "0x0" as image size
2022-09-12 10:39:36:  0: [buffer @ 000001402775ee80] Error setting option video_size to value 0x0.

    I'm not sure what's wrong. I can open the EXR itself, and we've used ffmpeg fine on a bunch of other EXRs (which also produced those errors about unsupported channels like VRayVelocity, but ffmpeg worked just fine). I believe I have set analyzeduration and probesize to the highest values allowed (MaxInt64).

    Anyone have any ideas or suggestions? Thanks!

    P.S. If more information would help here, please let me know what specifically would be useful. I have done research on this question (for example, setting analyzeduration to its highest value); I'm posting here because I'm lost as to what to try next and wondering whether others have had similar issues.
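    One hedged avenue, not from the thread itself: the log shows several VRayVelocity channels followed by "Multiple channels with index 2", so the failure looks tied to the multi-layer channel list rather than to probing depth (which is already at MaxInt64). FFmpeg's EXR decoder accepts a layer input option for selecting a named layer (and, in newer builds, a part option for multi-part files); selecting the wanted layer explicitly may let the decoder ignore the VRay auxiliary channels. A sketch, where the layer name "diffuse" and the output file are purely illustrative (a tool such as exrheader can list the actual layer names):

    ffmpeg.exe -layer "diffuse" -i test_501_030_075_cg_testing_v001.1001.exr -frames:v 1 layer_test.png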

    


  • ffmpeg doesn't use all the pictures when creating a video

    9 September 2022, by Mikhael Karabas

    I have 75 pictures of the same size for an animation, named 0.png ... 74.png.
When running ffmpeg to create a video out of them at 24 fps (command and log below), the resulting video, instead of the expected 75/24 = 3.125 s, is 2.667 s long and consists of only the first 64 frames (pictures), although ffmpeg reports that it has processed 75 frames.
I have checked with:

    ffmpeg -i output.webm out%%d.png

    on the resulting video; it indeed exports the first 64 frames and not the remaining 11 of them.

    Can't understand what I am doing wrong. Please kindly advise.

    Brief output below.

    Complete log: https://drive.google.com/file/d/1_J7wLPU9PJZ7jztpiJ8g_bZKPZfiK02L/view?usp=sharing

    D:\ffmpeg\ffmpeg-64.exe -report -framerate 24 -f image2 -i %01d.png -c:v libvpx-vp9 -pix_fmt yuva420p -crf 10 -b:v 0 output.webm
ffmpeg started on 2022-09-09 at 19:03:15
Report written to "ffmpeg-20220909-190315.log"
Log level: 48
ffmpeg version 2021-12-17-git-b780b6db64-essentials_build-www.gyan.dev Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 11.2.0 (Rev2, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
  libavutil      57. 11.100 / 57. 11.100
  libavcodec     59. 14.100 / 59. 14.100
  libavformat    59. 10.100 / 59. 10.100
  libavdevice    59.  0.101 / 59.  0.101
  libavfilter     8. 20.100 /  8. 20.100
  libswscale      6.  1.101 /  6.  1.101
  libswresample   4.  0.100 /  4.  0.100
  libpostproc    56.  0.100 / 56.  0.100
Input #0, image2, from '%01d.png':
  Duration: 00:00:03.13, start: 0.000000, bitrate: N/A
  Stream #0:0: Video: png, rgba(pc), 300x400, 24 fps, 24 tbr, 24 tbn
File 'output.webm' already exists. Overwrite? [y/N] y
Stream mapping:
  Stream #0:0 -> #0:0 (png (native) -> vp9 (libvpx-vp9))
Press [q] to stop, [?] for help
[libvpx-vp9 @ 000002dad505c8c0] v1.11.0-62-g7f45e94d9
Output #0, webm, to 'output.webm':
  Metadata:
    encoder         : Lavf59.10.100
  Stream #0:0: Video: vp9, yuva420p(tv, progressive), 300x400, q=2-31, 24 fps, 1k tbn
    Metadata:
      encoder         : Lavc59.14.100 libvpx-vp9
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame=   75 fps=9.9 q=0.0 Lsize=    1056kB time=00:00:02.58 bitrate=3347.2kbits/s speed=0.342x
video:1036kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.942318%
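
    Whether the last 11 frames were lost at encode time or are only being hidden by the player can be checked by counting decoded frames with ffprobe. A minimal check, assuming ffprobe from the same build is available:

    ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 output.webm

    If this prints 75, the frames are in the file and the short duration is a timestamp question; if it prints 64, they were dropped before muxing.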


    


  • FFmpeg C API HLS real time timestamps

    7 September 2022, by Robin

    I'm using the FFmpeg C API in C++ to read from a HLS stream. I need to know the real time of each AVPacket. I can extract the pts using AVPacket::pts but that is relative to the start of the stream.

    


    This is how the .m3u8 file looks:

    #EXTM3U
#EXT-X-VERSION:3
#EXT-X-ALLOW-CACHE:NO
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0

#EXT-X-PROGRAM-DATE-TIME:2022-09-07T14:01:56.612+02:00
#EXTINF:10.322561783,
1662552116.ts
#EXT-X-PROGRAM-DATE-TIME:2022-09-07T14:02:06.935+02:00
#EXTINF:10.320075936,
1662552126.ts

...


    


    The .m3u8 file contains an accurate EXT-X-PROGRAM-DATE-TIME, but how can I extract the one for the currently playing segment?

    Alternatively, the file name of each .ts file is the unix timestamp in seconds. Can I extract that somehow?
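
    Assuming the segment URI can be obtained, for example by fetching and parsing the playlist yourself alongside the demuxer, turning a name like 1662552126.ts into a wall-clock time is plain string parsing. A minimal sketch, with segment_start_time as a hypothetical helper name:

    #include <ctime>
    #include <string>

    // Parse a segment URI such as "https://host/path/1662552126.ts" into a
    // unix timestamp (seconds). Returns -1 if the name is not numeric.
    time_t segment_start_time(const std::string& uri) {
        size_t slash = uri.find_last_of('/');
        std::string name = (slash == std::string::npos) ? uri : uri.substr(slash + 1);
        try {
            return static_cast<time_t>(std::stoll(name.substr(0, name.find('.'))));
        } catch (...) {
            return -1;  // file name does not start with digits
        }
    }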

    


    If none of those are possible, is it possible to control the exact number of preloaded segments? I know the (approximate) segment length is 10 seconds, so I could just do the following when receiving the first AVPacket:

    start_time = current_time - segment_count * segment_length
start_pts = first_av_packet.pts


    


    And then to get the time of a later AVPacket, I could do:

    packet_time = start_time + new_packet.pts - start_pts


    


    This wouldn't give the same accuracy since the segments are not exactly the same length, but that is okay.