Advanced search

Media (0)

Keyword: - Tags -/serveur

No media matching your criteria is available on the site.

Other articles (31)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other documents (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance can be fully customised to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (6207)

  • Multiple trims to a video using ffmpeg generating video with shorter duration than expected [closed]

    9 September 2024, by Gerardo

    I have an application that, given a video, trims multiple parts of it using ffmpeg. Each part is cropped, scaled and then concatenated to produce a single video.

    


    To share an example, I have a video that is 1 minute and 44 seconds long at 60 fps. My goal is to trim 3 parts of the video:

    • First one between seconds 0 and 44.666
    • Second one between seconds 44.666 and 74.349
    • Third one between seconds 74.349 and 103.985

    The ffmpeg command I use to achieve this is the following:

    


    ffmpeg -y -hide_banner -i bg_720_1280.png -i error.mp4 -filter_complex "
[1:v]trim=0.0:44.666,setpts=PTS-STARTPTS,crop=405.0:720.0:437.5:0.0,scale=-2:1280.0[crop_1_0_v];
[1:a]atrim=0.0:44.666,volume=1.0,asetpts=PTS-STARTPTS[crop_1_0_a];
[0:v][crop_1_0_v]overlay=enable='between(t,0,44.666)':x=0.0:y=0.0[crop_1_0_v];
[1:v]trim=44.666:74.349,setpts=PTS-STARTPTS,crop=405.0:720.0:437.5:0.0,scale=-2:1280.0[crop_2_0_v];
[1:a]atrim=44.666:74.349,volume=1.0,asetpts=PTS-STARTPTS[crop_2_0_a];
[0:v][crop_2_0_v]overlay=enable='between(t,0,29.683)':x=0.0:y=0.0[crop_2_0_v];
[1:v]trim=74.349:103.985,setpts=PTS-STARTPTS,crop=405.0:720.0:437.5:0.0,scale=-2:1280.0[crop_3_0_v];
[1:a]atrim=74.349:103.985,volume=1.0,asetpts=PTS-STARTPTS[crop_3_0_a];
[0:v][crop_3_0_v]overlay=enable='between(t,0,29.636)':x=0.0:y=0.0[crop_3_0_v];
[crop_1_0_a][crop_2_0_a][crop_3_0_a]concat=n=3:v=0:a=1[a];
[crop_1_0_v][crop_2_0_v][crop_3_0_v]concat=n=3:v=1:a=0[outv];
[a]amix=1:duration=longest[outa]" -map "[outv]" -map "[outa]" -vcodec libx264 -acodec aac -sws_flags lanczos -pix_fmt yuv420p -crf 17 -preset superfast -r 60 test.mp4


    


    Running this command generates a video of only 11 seconds, which I can't explain: the three segments should add up to about 104 seconds (44.666 + 29.683 + 29.636 = 103.985). What is wrong with the command? I'm also open to recommendations in case there is a more efficient or performant way to build this ffmpeg command.

    


    I'm using the following FFmpeg version:

    


    ffmpeg version 7.0.2 Copyright (c) 2000-2024 the FFmpeg developers
  built with Apple clang version 15.0.0 (clang-1500.3.9.4)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/7.0.2 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags='-Wl,-ld_classic' --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox
  libavutil      59.  8.100 / 59.  8.100
  libavcodec     61.  3.100 / 61.  3.100
  libavformat    61.  1.100 / 61.  1.100
  libavdevice    61.  1.100 / 61.  1.100
  libavfilter    10.  1.100 / 10.  1.100
  libswscale      8.  1.100 /  8.  1.100
  libswresample   5.  1.100 /  5.  1.100
  libpostproc    58.  1.100 / 58.  1.100


    


    But I get the same issue with static ffmpeg builds.

    


    The file bg_720_1280.png is just a transparent image with a resolution of 720x1280. I think I could achieve the same thing by using the nullsrc filter at that resolution instead of this background image.
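    For what it's worth, a rough, untested sketch of that idea, using a fully transparent lavfi color source rather than nullsrc (nullsrc frames have undefined content); the trim values and output name below are placeholders, not my real graph:

# Rough sketch: a fully transparent 720x1280 lavfi color source instead of the PNG;
# the trim values and the output name test_canvas.mp4 are placeholders.
ffmpeg -y -hide_banner \
  -f lavfi -i "color=c=black@0.0:s=720x1280:r=60,format=rgba" \
  -i error.mp4 \
  -filter_complex "[1:v]trim=0:5,setpts=PTS-STARTPTS,crop=405:720:437:0,scale=-2:1280[v];[0:v][v]overlay=shortest=1[outv]" \
  -map "[outv]" -vcodec libx264 -pix_fmt yuv420p test_canvas.mp4

    With shortest=1 on the overlay (or an explicit -t), the infinite-duration color source stops when the overlaid clip ends, which seems to be the main thing to watch for when replacing a still image with a generated source.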

    


  • The problem with pydub.AudioSegment.from_file (ffmpeg)

    7 September 2024, by akkoolda

    I'm trying to get voice recording working properly in a Discord bot. When I try to convert an Audio Data object, an error occurs. I've already tried everything I can think of and can't solve the problem.

    


    Here is the code; the error occurs at the seg variable:

    


    async def once_done(sink: discord.sinks.MP3Sink, channel: discord.TextChannel, *args):
        words_list = []
        audio_segs: list[pydub.AudioSegment] = []
        longest = pydub.AudioSegment.empty()
        files: list[discord.File] = []

        for user_id, audio in sink.audio_data.items():
            try:
                payload: FileSource = {
                    "buffer": audio.file.read(),
                    "mimetype": "audio/mp3"  # specify the audio file type
                }

                #audio.on_format("mp3")
                seg = pydub.AudioSegment.from_file(audio.file, format="mp3")


    


    The error itself:

    


    Exception in thread Thread-3 (recv_audio):
    Traceback (most recent call last):
      File "c:\Users\olimp\OneDrive\Рабочий стол\Work on Python\management-followups-bot\main.py", line 66, in once_done
        seg = pydub.AudioSegment.from_file(audio.file, format="mp3")
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "C:\Users\olimp\AppData\Local\Programs\Python\Python311\Lib\site-packages\pydub\audio_segment.py", line 773, in from_file
        raise CouldntDecodeError(
    pydub.exceptions.CouldntDecodeError: Decoding failed. ffmpeg returned error code: 3199971767
    
    Output from ffmpeg/avlib:
    
    ffmpeg version 2024-09-02-git-3f9ca51015-full_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers
      built with gcc 13.2.0 (Rev5, Built by MSYS2 project)
      configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 
    --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libopenjpeg --enable-libquirc --enable-libuavs3d --enable-libxevd --enable-libzvbi --enable-libqrencode --enable-librav1e --enable-libsvtav1 --enable-libvvenc --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxeve --enable-libxvid --enable-libaom --enable-libjxl --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
      libavutil      59. 35.100 / 59. 35.100
      libavcodec     61. 11.100 / 61. 11.100
      libavformat    61.  5.101 / 61.  5.101
      libavdevice    61.  2.100 / 61.  2.100
      libavfilter    10.  2.102 / 10.  2.102
      libswscale      8.  2.100 /  8.  2.100
      libswresample   5.  2.100 /  5.  2.100
      libpostproc    58.  2.100 / 58.  2.100
    [cache @ 000001b25c1c6e00] Inner protocol failed to seekback end : -40
        Last message repeated 1 times
    [mp3 @ 000001b25c1c6840] Failed to find two consecutive MPEG audio frames.
    [cache @ 000001b25c1c6e00] Statistics, cache hits:0 cache misses:0
    [in#0 @ 000001b25c1ac740] Error opening input: Invalid data found when processing input
    Error opening input file cache:pipe:0.
    Error opening input files: Invalid data found when processing input
    
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "C:\Users\olimp\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
        self.run()
      File "C:\Users\olimp\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run
        self._target(*self._args, **self._kwargs)
      File "C:\Users\olimp\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\voice_client.py", line 868, in recv_audio
        result = callback.result()
                 ^^^^^^^^^^^^^^^^^
      File "C:\Users\olimp\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 456, in result
        return self.__get_result()
               ^^^^^^^^^^^^^^^^^^^
      File "C:\Users\olimp\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
        raise self._exception
      File "c:\Users\olimp\OneDrive\Рабочий стол\Work on Python\management-followups-bot\main.py", line 95, in once_done
        await channel.send(f"Ошибка при работе с Deepgram API: {str(e)}")
      File "C:\Users\olimp\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\abc.py", line 1666, in send
        data = await state.http.send_message(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "C:\Users\olimp\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\http.py", line 374, in request
        raise HTTPException(response, data)
    discord.errors.HTTPException: 400 Bad Request (error code: 50035): Invalid Form Body
    In content: Must be 2000 or fewer in length.

    Besides solving this error, how else could user audio be recorded properly?


    


    How can I fix this, or are there perhaps other options?
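    One way I might narrow it down (a rough sketch, assuming the raw bytes from audio.file have been dumped to a hypothetical file such as debug_user.mp3 before anything else reads them) is to probe that dump directly and see whether ffmpeg recognises it as MP3 at all:

# Rough sketch: probe the dumped buffer directly; debug_user.mp3 is a
# hypothetical file written from audio.file before any read() calls.
ffprobe -v error -show_format -show_streams debug_user.mp3

    If ffprobe also fails to find valid MPEG audio frames in that file, then the data handed to pydub is probably not valid MP3 (or the stream position is not at the start), rather than pydub itself being at fault.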

    


  • How to reinsert edited metadata stream information from the FFMETADATAFILE file? [closed]

    6 September 2024, by SENYCH

    I'm working on simplifying and speeding up the process of editing video metadata for user convenience. I've successfully edited stream metadata using console commands such as:

    


    ffmpeg -i INPUT.mp4 -map 0 -metadata:s:0 "handler_name=An other video" -metadata:s:1 "handler_name=An other audio recording in russian" -metadata:s:2 "handler_name=An other audio recording in english" -metadata:s:3 "handler_name=An other audio recording in japanese" -c copy OUTPUT.mp4


    


    However, I'd like to accomplish this through an ffmetadata file. Here's the approach I've taken:

    


    ffmpeg -t 0 -i INPUT.mp4 -map 0 -c copy -f ffmetadata ffmetadata.txt -hide_banner


    


    The original ffmetadata.txt is:

    


    ;FFMETADATA1
major_brand=isom
minor_version=512
compatible_brands=isomiso2avc1mp41
encoder=Lavf61.5.101
[STREAM]
language=und
handler_name=The best video
vendor_id=[0][0][0][0]
[STREAM]
language=rus
handler_name=The best russian language
vendor_id=[0][0][0][0]
[STREAM]
language=eng
handler_name=The best english language
vendor_id=[0][0][0][0]
[STREAM]
language=jpn
handler_name=The best japanese language
vendor_id=[0][0][0][0]


    


    I then edit the file (saving it as ffmetadata2.txt) to update the handler_name values:

    


    ;FFMETADATA1
major_brand=isom
minor_version=512
compatible_brands=isomiso2avc1mp41
encoder=Lavf61.5.101
[STREAM]
language=und
handler_name=An other video
vendor_id=[0][0][0][0]
[STREAM]
language=rus
handler_name=An other audio recording in russian
vendor_id=[0][0][0][0]
[STREAM]
language=eng
handler_name=An other audio recording in english
vendor_id=[0][0][0][0]
[STREAM]
language=jpn
handler_name=An other audio recording in japanese
vendor_id=[0][0][0][0]


    


    Then I attempt to apply the updated metadata from ffmetadata2.txt:

    


    C:\Users\Alexander\Videos>ffmpeg -i INPUT.mp4 -i ffmetadata2.txt -map 0:v -map 0:a -map_metadata 1 -c copy OUTPUT2.mp4 -hide_banner


    


    Despite these steps, only the global metadata is updated, while the per-stream metadata remains unchanged; the console output confirms that the stream metadata is not updated as expected.

    


    What am I missing? How can I ensure that the stream-specific metadata is also updated correctly when using an ffmetadata file?

    


    Additional Information:

    


      

    • FFmpeg version: 2024-08-26-git-98610fe95f-full_build
    • The ffmetadata file format and the approach I've used should be correct according to the FFmpeg documentation.


    


    I would greatly appreciate any recommendations or suggestions on how to solve this problem!

    


    I found a workaround for my problem, but it isn't ideal, as it requires specifying -map_metadata:s:N 1:s:N for each stream individually, which is quite cumbersome. Is there a way to simplify this process and avoid having to set metadata for each stream separately?

    


    The command I'm using is:

    


    C:\Users\Alexander\Videos>ffmpeg -i INPUT.mp4 -i ffmetadata2.txt -map 0 -map_metadata:s:0 1:s:0 -map_metadata:s:1 1:s:1 -map_metadata:s:2 1:s:2 -map_metadata:s:3 1:s:3 -c copy OUTPUT2.mp4 -hide_banner


    


    This works, but having to specify -map_metadata:s:N for each stream creates extra work, especially as the number of streams increases. Is there a more efficient way to handle this?
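    One way I can imagine scripting around it (a rough sketch, assuming a POSIX shell; on Windows the same idea would need a small batch or Python wrapper) is to count the streams with ffprobe and generate the mapping options automatically:

# Rough sketch, assuming a POSIX shell: count the streams with ffprobe, then
# build one -map_metadata:s:N 1:s:N pair per stream instead of typing them out.
N=$(ffprobe -v error -show_entries format=nb_streams -of default=nw=1:nk=1 INPUT.mp4)
MAPS=""
for i in $(seq 0 $((N - 1))); do
  MAPS="$MAPS -map_metadata:s:$i 1:s:$i"
done
ffmpeg -i INPUT.mp4 -i ffmetadata2.txt -map 0 $MAPS -c copy OUTPUT2.mp4 -hide_banner

    That removes the typing, but under the hood it is still one mapping per stream, so a single option that applies all [STREAM] sections in order would of course be nicer.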