Advanced search

Media (0)


No media matching your criteria are available on this site.

Other articles (68)

  • General document management

    13 May 2011, by

    MediaSPIP never modifies the original document uploaded to the site.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original downloadable in case it cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
    The tables below explain what MediaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (5523)

  • Seeking with ffmpeg options fails or causes delayed playback in Discord bot

    29 August 2022, by J Petersen

    My Discord bot allows users to play a song starting from a timestamp.

    The problem is that playback is delayed, and the audio plays faster and is jumbled when start times >= 30s are set.

    Results from testing different start times (same URL, 30-second duration):

    Entered Start Time (s)    Playback Delay (s)    Song Playback Time (s)
    0                         3                     30
    30                        10                    22
    60                        17                    17
    120                       31                    2
    150                       120                   <1

    I am setting the start time using ffmpeg_options, as suggested in this question.

    Does anyone understand why the audio playback is delayed and jumbled? How can I reduce the playback delay and allow users to start in the middle of a multi-chapter YouTube video?

    Code:

    import discord
    import youtube_dl
    import asyncio


    # Suppress noise about console usage from errors
    youtube_dl.utils.bug_reports_message = lambda: ""


    ytdl_format_options = {
        "format": "bestaudio/best",
        "outtmpl": "%(extractor)s-%(id)s-%(title)s.%(ext)s",
        "restrictfilenames": True,
        "noplaylist": False,
        "yesplaylist": True,
        "nocheckcertificate": True,
        "ignoreerrors": False,
        "logtostderr": False,
        "quiet": True,
        "no_warnings": True,
        "default_search": "auto",
        "source_address": "0.0.0.0",  # Bind to ipv4 since ipv6 addresses cause issues at certain times
    }

    ytdl = youtube_dl.YoutubeDL(ytdl_format_options)


    class YTDLSource(discord.PCMVolumeTransformer):
        def __init__(self, source: discord.AudioSource, *, data: dict, volume: float = 0.5):
            super().__init__(source, volume)

            self.data = data

            self.title = data.get("title")
            self.url = data.get("url")

        @classmethod
        async def from_url(cls, url, *, loop=None, stream=False, timestamp=0):
            ffmpeg_options = {
                "options": f"-vn -ss {timestamp}"}

            loop = loop or asyncio.get_event_loop()

            data = await loop.run_in_executor(None, lambda: ytdl.extract_info(url, download=not stream))
            if "entries" in data:
                # Takes the first item from a playlist
                data = data["entries"][0]

            filename = data["url"] if stream else ytdl.prepare_filename(data)
            return cls(discord.FFmpegPCMAudio(filename, **ffmpeg_options), data=data)


    intents = discord.Intents.default()

    bot = discord.Bot(intents=intents)

    @bot.slash_command()
    async def play(ctx, audio: discord.Option(), seconds: discord.Option(), timestamp: discord.Option()):
        channel = ctx.author.voice.channel
        voice = await channel.connect()
        player = await YTDLSource.from_url(audio, loop=bot.loop, stream=True, timestamp=int(timestamp))
        voice.play(player)
        await asyncio.sleep(int(seconds))
        await voice.disconnect()

    token = token value
    bot.run(token)
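    The measurements in the table (delay growing with the entered start time) match what happens when -ss is passed as an output option, as in the "options" key of the code: ffmpeg decodes the stream from the beginning and discards everything before the mark. Passing -ss as an input option instead makes ffmpeg seek in the source before decoding. A minimal sketch of such an option builder, assuming discord.py's FFmpegPCMAudio keyword arguments before_options/options; the reconnect flags are a common addition for HTTP streams and are not part of the original code:

```python
def ffmpeg_seek_options(timestamp: int) -> dict:
    """Build FFmpegPCMAudio kwargs that seek *before* decoding.

    Input-side -ss (before_options) tells ffmpeg to seek in the source,
    which is near-instant for seekable inputs; output-side -ss (options)
    forces decoding from the start and discarding audio up to the mark.
    """
    return {
        "before_options": f"-ss {timestamp} -reconnect 1 "
                          "-reconnect_streamed 1 -reconnect_delay_max 5",
        "options": "-vn",  # output side: drop video, as in the original snippet
    }
```

    from_url() would then pass **ffmpeg_seek_options(timestamp) to discord.FFmpegPCMAudio in place of the hard-coded dict.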

  • ffmpeg: programmatically use libavcodec to encode and decode a raw bitmap, all in just a few milliseconds and with a small compressed size, on a Raspberry Pi 4

    15 March 2023, by Jerry Switalski

    We need to compress a 1024x2048 image we produce, from raw 32-bit RGBA (8 MB) down to roughly JPEG size (200-500 KB), on a Raspberry Pi 4, all in a C/C++ program.

    The compression needs to take just a few milliseconds, otherwise it is pointless for us.

    We decided to try a supported encoder through the ffmpeg development libraries, in C/C++ code.

    The problem we are facing is that when we adapted the encoding example provided by the ffmpeg developers, the times we measured were unacceptable.

    Here is the edited code where the frames are created:

    for (i = 0; i < 25; i++)
    {
    #ifdef MEASURE_TIME
        auto start_time = std::chrono::high_resolution_clock::now();
        std::cout << "START Encoding frame...\n";
    #endif
        fflush(stdout);

        ret = av_frame_make_writable(frame);
        if (ret < 0)
            exit(1);

        // I try here to convert our 32-bit RGBA image to YUV pixel format:

        for (y = 0; y < c->height; y++)
        {
            for (x = 0; x < c->width; x++)
            {
                int imageIndexY = y * frame->linesize[0] + x;

                uint32_t rgbPixel = ((uint32_t*)OutputDataImage)[imageIndexY];

                double Y, U, V;
                uint8_t R = rgbPixel << 24;
                uint8_t G = rgbPixel << 16;
                uint8_t B = rgbPixel << 8;

                YUVfromRGB(Y, U, V, (double)R, (double)G, (double)B);
                frame->data[0][imageIndexY] = (uint8_t)Y;

                if (y % 2 == 0 && x % 2 == 0)
                {
                    int imageIndexU = (y / 2) * frame->linesize[1] + (x / 2);
                    int imageIndexV = (y / 2) * frame->linesize[2] + (x / 2);

                    frame->data[1][imageIndexU] = (uint8_t)U;
                    frame->data[2][imageIndexV] = (uint8_t)Y;
                }
            }
        }

        frame->pts = i;

        /* encode the image */
        encode(c, frame, pkt, f);

    #ifdef MEASURE_TIME
        auto end_time = std::chrono::high_resolution_clock::now();
        auto time = end_time - start_time;
        std::cout << "FINISHED Encoding frame in: " << time / std::chrono::milliseconds(1) << "ms.\n";
    #endif
    }

    Here are some important parts from earlier in that function:

    codec_name = "mpeg4";

    codec = avcodec_find_encoder_by_name(codec_name);

    c = avcodec_alloc_context3(codec);

    c->bit_rate = 1000000;
    c->width = IMAGE_WIDTH;
    c->height = IMAGE_HEIGHT;
    c->gop_size = 1;
    c->max_b_frames = 1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;

    IMAGE_WIDTH and IMAGE_HEIGHT are 1024 and 2048 respectively.

    The results I got when running on the Raspberry Pi 4 look like this:

    START Encoding frame...
    Send frame   0
    FINISHED Encoding frame in: 40ms.
    START Encoding frame...
    Send frame   1
    Write packet   0 (size=11329)
    FINISHED Encoding frame in: 60ms.
    START Encoding frame...
    Send frame   2
    Write packet   1 (size=11329)
    FINISHED Encoding frame in: 58ms.

    Since I am completely new to encoding and codecs, my question is how to do this correctly and efficiently, so that the timing drops to a few milliseconds; I am also not sure that the codec or the pixel format chosen is the best for the job.

    The rest of the meaningful code can be seen here (the encode() function can be found in the ffmpeg developers' example linked above):

    void RGBfromYUV(double& R, double& G, double& B, double Y, double U, double V)
    {
        Y -= 16;
        U -= 128;
        V -= 128;
        R = 1.164 * Y + 1.596 * V;
        G = 1.164 * Y - 0.392 * U - 0.813 * V;
        B = 1.164 * Y + 2.017 * U;
    }

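    Independent of speed, two details of the per-pixel loop earlier in the post are worth double-checking: the channel extraction shifts left (rgbPixel << 24 truncated to uint8_t is always 0) where it presumably should shift right, and the V plane is written with Y instead of V. The presumably intended extraction and the forward BT.601 studio-range conversion (the inverse of the RGBfromYUV() shown) can be sketched as follows, in Python purely for clarity; the coefficients are the standard ones and an assumption about intent, not the poster's code:

```python
def rgba_components(pixel: int):
    """Split a packed 0xRRGGBBAA integer into 8-bit channels."""
    r = (pixel >> 24) & 0xFF   # not pixel << 24, which truncates to 0
    g = (pixel >> 16) & 0xFF
    b = (pixel >> 8) & 0xFF
    a = pixel & 0xFF
    return r, g, b, a


def yuv_from_rgb(r: float, g: float, b: float):
    """Forward BT.601 studio-range conversion (inverse of RGBfromYUV)."""
    y = 16 + 0.257 * r + 0.504 * g + 0.098 * b
    u = 128 - 0.148 * r - 0.291 * g + 0.439 * b
    v = 128 + 0.439 * r - 0.368 * g - 0.071 * b
    return y, u, v
```

    For the millisecond budget discussed here, a double-precision per-pixel loop is itself a large cost; libswscale's sws_scale() performs RGBA-to-YUV420P conversion in SIMD-optimized code and is the usual way to fill an AVFrame from RGBA data.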

  • avcodec/svq1enc: Add SVQ1EncDSPContext, make codec context private

    10 October 2022, by Andreas Rheinhardt
    avcodec/svq1enc: Add SVQ1EncDSPContext, make codec context private
    

    Currently, SVQ1EncContext is defined in a header that is also
    included by the arch-specific code that initializes the one
    and only dsp function that this encoder uses directly.

    But the arch-specific functions to set this dsp function
    do not need anything from SVQ1EncContext. This commit therefore
    adds a small SVQ1EncDSPContext whose only member is said
    function pointer and renames svq1enc.h to svq1encdsp.h
    to avoid exposing unnecessary internals to these init
    functions (and the whole mpegvideo with it).

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavcodec/ppc/svq1enc_altivec.c
    • [DH] libavcodec/svq1enc.c
    • [DH] libavcodec/svq1enc.h
    • [DH] libavcodec/svq1encdsp.h
    • [DH] libavcodec/x86/svq1enc_init.c