
Other articles (112)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
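
    The two extra actions described above (reading the streams' technical information and extracting a frame for the thumbnail) are the kind of work usually handed off to FFmpeg tools. Below is a minimal sketch of that idea in Python, assuming ffprobe/ffmpeg are on the PATH and using a hypothetical file name; it is not SPIPMotion's actual code.

    import json
    import subprocess

    def probe_streams(path):
        """Return the audio/video stream information reported by ffprobe."""
        out = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json",
             "-show_streams", path],
            capture_output=True, check=True, text=True,
        )
        return json.loads(out.stdout)["streams"]

    def extract_thumbnail(path, thumb_path, seconds=1.0):
        """Grab a single frame at the given offset to use as the thumbnail."""
        subprocess.run(
            ["ffmpeg", "-y", "-ss", str(seconds), "-i", path,
             "-frames:v", "1", thumb_path],
            check=True,
        )

    # Hypothetical usage: "source.mp4" stands in for the uploaded document.
    streams = probe_streams("source.mp4")
    extract_thumbnail("source.mp4", "thumb.jpg")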

  • Videos

    21 April 2011

    Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 <video> tag.
    One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name no names) and that each browser natively handles only certain video formats.
    Its main advantage is that video playback is supported natively by the browser, which removes the need for Flash and (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (7572)

  • FFmpeg/C/C++: where are the Closed Captions (608) located, and how can they be decoded, in Matrox DVCPRO SD files using ffmpeg and C/C++?

    2 March 2020, by Helmuth Schmitz

    I'm going a little bit crazy over some specific video files. I want to decode them using ffmpeg/C++, specifically the 608 closed captions. The AVI file format/codec is DVCPRO at SD (720x480) resolution. The encoder is "Matrox DSX AVI file. Format: 6. Build: 1.0.0.451". Everything was fine until the moment I tried to view or decode the 608 closed captions: there was nothing on line 21. So I decided to play the file in the Matrox Sample Video Player (an example from their SDK) and, to my surprise, the 608 closed captions were there. So I read the Matrox SDK documentation and found this:

    Using closed caption information
    Some of the Matrox NTSC codecs can extract the closed caption information from the
    video buffers and store it as metadata in the compressed buffer. On decompression,
    these same codecs can restore the lines to their proper position. Other NTSC codecs
    just encode the closed caption lines as video data. The closed caption lines that are
    extracted are defined by the EIA-608 specification. The two lines are defined as lines
    21 from both the first and second fields.
    The following codecs keep the closed caption lines as metadata (NTSC only):
    • Matrox DV (encode and decode).
    • Matrox MPEG-2 I-frame (encode and decode).
    • Matrox MPEG-2 IBP, except 720×512 resolution (decode only).
    The following codecs keep the closed caption lines as video data (NTSC only):
    • Matrox D10 (encode and decode).
    • Matrox MPEG-2 IBP in 720×512 resolution (decode only).
    • Matrox M-JPEG (decode only).

    So I assumed the VBI/closed caption (608) data would be stored as some kind of metadata. But after exploring the entire file using ffmpeg/libavcodec with C/C++, I found nothing "hidden" in the metadata, not even in the per-frame metadata. It's like magic: there is nothing on line 21 and nothing in the metadata. So how is the Matrox sample player the only video player that shows it?
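
    One way to sanity-check whether ffmpeg extracts any caption data at all from such a file is to look for per-frame closed-caption side data. The sketch below makes that check via ffprobe's JSON output; it assumes ffprobe is on the PATH, uses a hypothetical file name, and is not specific to the Matrox codecs (it only reports captions that ffmpeg itself exposes).

    import json
    import subprocess

    # Ask ffprobe for per-frame information, including side data, as JSON.
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-select_streams", "v:0", "-show_frames", "input.avi"],
        capture_output=True, check=True, text=True,
    )
    frames = json.loads(out.stdout).get("frames", [])

    # Report any frame whose side data looks like closed captions
    # (ffmpeg labels A53/608 caption side data accordingly).
    for i, frame in enumerate(frames):
        for sd in frame.get("side_data_list", []):
            if "Closed Captions" in sd.get("side_data_type", ""):
                print(f"frame {i}: {sd['side_data_type']}")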

  • Can JavaScript MSE play a segmented mp4 from the middle?

    10 July 2020, by Elias Wagnerberger

    In my current project I have a video stream that ffmpeg encodes to a segmented mp4. That encoded data is piped into an application that sends it to whoever connects to that application through a websocket. When a client connects, I make sure to send the ftyp and moov boxes first and then send the most recent segments received from ffmpeg.

    On the client side I just pass all binary data from the websocket to MSE.

    The problem I am facing is that this works if the client is connected from the very start and gets all the fragments that ffmpeg pipes out, but it does not work if the client connects after ffmpeg has already sent its first few fragments.

    My question is:
    Is it possible for MSE to play a fragmented mp4 from the middle when it is also provided the init segments?

    If it is possible, how would that need to be implemented?

    If it isn't possible, what format would allow me to stream live video over a websocket?
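
    The relay described above (send ftyp and moov first, then the most recent fragments) can be sketched in a few lines. The class below is a minimal illustration under my own assumptions, not the asker's application: it splits the raw fragmented-mp4 byte stream coming from ffmpeg into complete top-level boxes, caches the init segment for late joiners, and returns whole boxes that a websocket server could broadcast, so every client starts at a fragment boundary.

    import struct

    def iter_boxes(data: bytes):
        """Yield (type, bytes) for each complete top-level MP4 box in data."""
        offset = 0
        while offset + 8 <= len(data):
            size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
            if size < 8 or offset + size > len(data):
                break  # incomplete box (or unsupported 64-bit size): wait for more input
            yield box_type.decode("latin-1"), data[offset:offset + size]
            offset += size

    class FragmentRelay:
        def __init__(self):
            self.init_segment = b""  # ftyp + moov, replayed to every new client
            self.pending = b""       # bytes that do not yet form a complete box

        def feed(self, chunk: bytes):
            """Feed raw fMP4 bytes from ffmpeg; return complete boxes to broadcast."""
            self.pending += chunk
            boxes, consumed = [], 0
            for box_type, box in iter_boxes(self.pending):
                if box_type in ("ftyp", "moov"):
                    self.init_segment += box  # cache the init segment for late joiners
                boxes.append(box)
                consumed += len(box)
            self.pending = self.pending[consumed:]
            return boxes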

  • A soundboard for Discord with discord.py using already downloaded .mp3 files

    8 August 2019, by Kerberos Kev

    I'm setting up a Discord bot with discord.py and want to implement a soundboard. The soundboard should take already downloaded .mp3 files and play them in Discord. The bot already joins and leaves a voice channel automatically, but does not play any of my sound files.

    I tried converting the mp3 file into an opus one and playing it without ffmpeg, but this didn't work either.

    import discord
    from discord.ext import commands
    from discord.utils import get
    from mutagen.mp3 import MP3
    import time
    import os

    client = commands.Bot(command_prefix='<')
    a = './'+'kurz-kacken.mp3'


    class Soundboard(commands.Cog):

       def __init__(self, client):
           self.client = client  # keep a reference to the bot, not to the cog itself

       @commands.command(pass_context=True)
       async def soundboard(self, ctx, songname: str):
           channel = ctx.message.author.voice.channel
           voice = get(client.voice_clients, guild=ctx.guild)

           if voice and voice.is_connected():
               await voice.move_to(channel)
           else:
               voice = await channel.connect()

               print(f"The bot has connected to {channel}\n")

           # audio = MP3('./'+'kurz-kacken.mp3')
           # a = audio.info.length
           # FFmpegPCMAudio decodes the mp3 through ffmpeg; play() returns as
           # soon as playback has started instead of waiting for it to finish
           voice.play(discord.FFmpegPCMAudio('./'+'kurz-kacken.mp3'),
                      after=lambda e: print("Song done!"))
           voice.source = discord.PCMVolumeTransformer(voice.source)
           voice.source.volume = 0.07
           # time.sleep(a)
           await ctx.send('ended')

           # because play() did not block, this disconnects almost immediately
           if voice and voice.is_connected():
               await voice.disconnect()


    def setup(client):
       client.add_cog(Soundboard(client))