
Media (91)

Other articles (107)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Frequent problems

    10 March 2010

    PHP with safe_mode enabled
    One of the main sources of problems stems from the PHP configuration, and in particular from the activation of safe_mode.
    The solution would be to either disable safe_mode or place the script in a directory accessible to Apache for the site.

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (5557)

  • Emscripten and Web Audio API

    29 April 2015, by Multimedia Mike — HTML5

    Ha! They said it couldn’t be done! Well, to be fair, I said it couldn’t be done. Or maybe that I just didn’t have any plans to do it. But I did it: I used Emscripten to cross-compile a CPU-intensive C/C++ codebase (Game Music Emu) to JavaScript. Then I leveraged the Web Audio API to output audio and visualize the audio using an HTML5 canvas.
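
    The bridge into the cross-compiled library is plain JavaScript. As a minimal sketch (the wiring below is illustrative rather than the actual player code, though gme_play is Game Music Emu’s real C entry point, and cwrap()/HEAP16 are standard Emscripten facilities):

    // illustrative only: assumes _gme_play was exported when building with Emscripten
    declare const Module: any;                      // object provided by Emscripten's generated JS

    // cwrap() turns an exported C function into a callable JavaScript function
    const gmePlay = Module.cwrap("gme_play", "number",
                                 ["number", "number", "number"]);

    // fill `count` 16-bit samples into a buffer allocated on the Emscripten heap;
    // Module.HEAP16 is the typed-array view used to copy the samples back out
    function generateFrames(emu: number, heapPtr: number, count: number): number {
      return gmePlay(emu, count, heapPtr);
    }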

    Want to see it in action? Here’s a demonstration. Perhaps I will be able to expand the reach of my Game Music site when I can drop the odd Native Client plugin. This JS-based player works great on Chrome, Firefox, and Safari across desktop operating systems.

    But this endeavor was not without its challenges.

    Programmatically Generating Audio
    First, I needed to figure out the proper method for procedurally generating audio and making it available to output. Generally, there are 2 approaches for audio output:

    1. Sit in a loop and generate audio, writing it out via a blocking audio call
    2. Implement a callback that the audio system can invoke in order to generate more audio when needed

    Option #1 is not a good idea for an event-driven language like JavaScript. So I hunted through the rather flexible Web Audio API for a method that allowed something like approach #2. Callbacks are everywhere, after all.

    I eventually found what I was looking for with the ScriptProcessorNode. It seems to be intended to apply post-processing effects to audio streams. A program registers a callback which is passed configurable chunks of audio for processing. I subverted this by simply overwriting the input buffers with the audio generated by the Emscripten-compiled library.
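
    In sketch form, that wiring looks something like the following; generateAudio() here is a hypothetical stand-in for the call into the Emscripten-compiled library, not the player’s actual code:

    // hypothetical hook into the compiled library; not a real export name
    declare function generateAudio(left: Float32Array, right: Float32Array,
                                   frames: number): void;

    const audioCtx = new AudioContext();
    const FRAMES = 4096;                            // frames handed to each callback
    // one (unused) input channel, two output channels: the node acts as a source
    const node = audioCtx.createScriptProcessor(FRAMES, 1, 2);

    node.onaudioprocess = (event: AudioProcessingEvent) => {
      const left = event.outputBuffer.getChannelData(0);
      const right = event.outputBuffer.getChannelData(1);
      generateAudio(left, right, FRAMES);           // overwrite the buffers in place
    };

    node.connect(audioCtx.destination);             // audio starts flowing once connected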

    The ScriptProcessorNode interface is fairly well documented and works across multiple browsers. However, it is already marked as deprecated:

    Note: As of the August 29, 2014 Web Audio API spec publication, this feature has been marked as deprecated, and is soon to be replaced by Audio Workers.

    Despite being marked as deprecated for 8 months as of this writing, there exists no appreciable amount of documentation for the successor API, these so-called Audio Workers.

    Vive la web standards!

    Visualize This
    The next problem was visualization. The Web Audio API provides the AnalyserNode interface for accessing both time and frequency domain data from a running audio stream (and fetching the data as either unsigned bytes or floating-point numbers, depending on what the application needs). This is a pretty neat idea. I just wish I could make the API work. The simple demos I could find worked well enough. But when I wired up a prototype to fetch and visualize the time-domain wave, all I got were center-point samples (an array of values that were all 128).

    Even if the API did work, I’m not sure if it would have been that useful. Per my reading of the AnalyserNode API, it only returns data as a single channel. Why would I want that? My application supports audio with 2 channels. I want 2 channels of data for visualization.
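
    For reference, the usage pattern I was attempting looks roughly like this; it is the standard AnalyserNode shape rather than anything from the finished player, since in my prototype the array simply came back full of 128s:

    // standard AnalyserNode pattern; sourceNode stands for whatever node is
    // producing the audio (the ScriptProcessorNode in this case)
    declare const audioCtx: AudioContext;
    declare const sourceNode: AudioNode;

    const analyser = audioCtx.createAnalyser();
    analyser.fftSize = 2048;

    sourceNode.connect(analyser);                   // tap the stream for analysis
    analyser.connect(audioCtx.destination);         // and pass the audio onward

    const timeDomain = new Uint8Array(analyser.fftSize);
    function draw(): void {
      analyser.getByteTimeDomainData(timeDomain);   // unsigned bytes; 128 is the center line
      // ...render timeDomain onto a canvas here...
      requestAnimationFrame(draw);
    }
    requestAnimationFrame(draw);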

    How To Synchronize
    So I rolled my own visualization solution by maintaining a circular buffer of audio samples as they were being generated. Then, requestAnimationFrame() provided the rendering callbacks. The next problem was audio-visual sync. But that certainly is not unique to this situation: maintaining proper A/V sync is a perennial puzzle in real-time multimedia programming. I was able to glean enough timing information from the environment to achieve reasonable A/V sync (verify for yourself).
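
    The shape of that solution is roughly the following; the buffer size and function names are illustrative:

    const RING_SIZE = 32768;                        // samples of history kept
    const ring = new Float32Array(RING_SIZE);
    let writePos = 0;

    // called from the audio callback with each freshly generated chunk
    function pushSamples(chunk: Float32Array): void {
      for (let i = 0; i < chunk.length; i++) {
        ring[writePos] = chunk[i];
        writePos = (writePos + 1) % RING_SIZE;
      }
    }

    // called by the browser once per animation frame (roughly 60 times per second)
    function render(): void {
      const view = new Float32Array(1024);          // the most recent 1024 samples
      for (let i = 0; i < 1024; i++) {
        view[i] = ring[(writePos - 1024 + i + RING_SIZE) % RING_SIZE];
      }
      // drawWaveform(view);                        // canvas drawing omitted here
      requestAnimationFrame(render);
    }
    requestAnimationFrame(render);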

    Pause/Resume
    The next problem I encountered with the Web Audio API was pause/resume facilities, or the lack thereof. For all its bells and whistles, the API’s omission of such facilities seems most unusual, as if the design philosophy was, “Once the user starts playing audio, they will never, ever have cause to pause the audio.”

    Then again, I must understand that mine is not a use case that the design committee considered and I’m subverting the API in ways the designers didn’t intend. Typical use cases for this API seem to include such workloads as:

    • Downloading, decoding, and playing back a compressed audio stream via the network, applying effects, and visualizing the result
    • Accessing microphone input, applying effects, visualizing, encoding and sending the data across the network
    • Firing sound effects in a gaming application
    • MIDI playback via JavaScript (this honestly amazes me)

    What they did not seem to have in mind was what I am trying to do– synthesize audio in real time.

    I implemented pause/resume in a sub-par manner: pausing has the effect of generating 0 values when the ScriptProcessorNode callback is invoked, while also canceling any animation callbacks. Thus, audio output is technically still occurring, it’s just that the audio is pure silence. It’s not a great solution because CPU is still being used.
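
    Concretely, the workaround amounts to something like this, reusing the hypothetical generateAudio() and render() pieces from the sketches above:

    declare const node: ScriptProcessorNode;
    declare function generateAudio(left: Float32Array, right: Float32Array,
                                   frames: number): void;
    declare function render(): void;

    let paused = false;
    let rafId = 0;

    node.onaudioprocess = (event: AudioProcessingEvent) => {
      const left = event.outputBuffer.getChannelData(0);
      const right = event.outputBuffer.getChannelData(1);
      if (paused) {
        left.fill(0);                               // keep emitting pure silence
        right.fill(0);
      } else {
        generateAudio(left, right, left.length);
      }
    };

    function togglePause(): void {
      paused = !paused;
      if (paused) {
        cancelAnimationFrame(rafId);                // stop the visualization callbacks
      } else {
        rafId = requestAnimationFrame(render);      // resume drawing
      }
    }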

    Future Work
    I have a lot more player libraries to port to this new system. But I think I have a good framework set up.

  • Making my Discord Bot automatically play music from WAV on loop

    5 December 2022, by Mativ9

    So I was trying to make a Discord Bot in Python which would automatically join a voice channel and play my own music from a list in a loop. So far it's joining the channel and shuffling the list so the music plays in random order, but when I try to write code so that after one song it will play the next one, it crashes and doesn't play anything (though it does join the channel).

    


import discord
import random
from discord.ext import commands
from discord import FFmpegPCMAudio

#playlist as a list
queue = [FFmpegPCMAudio('Iceland1.wav'), FFmpegPCMAudio('Iceland2.wav'), FFmpegPCMAudio('Iceland3.wav'), FFmpegPCMAudio('Iceland4.wav'),
         FFmpegPCMAudio('Iceland5.wav'), FFmpegPCMAudio('Iceland6.wav'), FFmpegPCMAudio('Iceland7.wav'), FFmpegPCMAudio('Iceland8.wav'),
         FFmpegPCMAudio('Iceland9.wav'), FFmpegPCMAudio('Iceland10.wav'), FFmpegPCMAudio('Norway1.wav'), FFmpegPCMAudio('Norway2.wav'),
         FFmpegPCMAudio('Norway3.wav'), FFmpegPCMAudio('Norway4.wav'), FFmpegPCMAudio('Norway5.wav'), FFmpegPCMAudio('Norway6.wav'),
         FFmpegPCMAudio('Norway7.wav'), FFmpegPCMAudio('Norway8.wav'), FFmpegPCMAudio('Norway9.wav'), FFmpegPCMAudio('Norway10.wav'),
         FFmpegPCMAudio('Norway11.wav'), FFmpegPCMAudio('Presents1.wav'), FFmpegPCMAudio('Presents2.wav'), FFmpegPCMAudio('Presents3.wav'),
         FFmpegPCMAudio('Presents4.wav'), FFmpegPCMAudio('Presents5.wav'), FFmpegPCMAudio('Presents6.wav'), FFmpegPCMAudio('Presents7.wav'),
         FFmpegPCMAudio('Presents8.wav'), FFmpegPCMAudio('Presents9.wav'), FFmpegPCMAudio('Presents10.wav'), FFmpegPCMAudio('Autumn1.wav'),
         FFmpegPCMAudio('Autumn2.wav'), FFmpegPCMAudio('Autumn3.wav'), FFmpegPCMAudio('Autumn4.wav'), FFmpegPCMAudio('Autumn5.wav'),
         FFmpegPCMAudio('Autumn6.wav'), FFmpegPCMAudio('Autumn7.wav'), FFmpegPCMAudio('Autumn8.wav'), FFmpegPCMAudio('Covers1.wav'),
         FFmpegPCMAudio('Covers2.wav'), FFmpegPCMAudio('Covers3.wav'), FFmpegPCMAudio('Covers4.wav'), FFmpegPCMAudio('Covers5.wav'),
         FFmpegPCMAudio('Covers6.wav'), FFmpegPCMAudio('Covers7.wav'), FFmpegPCMAudio('Covers8.wav'), FFmpegPCMAudio('Covers9.wav'),
         FFmpegPCMAudio('Covers10.wav'), FFmpegPCMAudio('Covers11.wav'), FFmpegPCMAudio('Covers12.wav')]

intents = discord.Intents.default()
intents.message_content = True
client = commands.Bot(command_prefix='>', intents=intents)

@client.event
async def on_ready():
    global voice
    print("The Matt Bot is ready")
    print("--------------------------")
    await client.change_presence(activity=discord.Game('Matt Krupa')) #makes my bot show "Playing Matt Krupa"
    channel = client.get_channel(thechannelid) #getting the channel ID
    voice = await channel.connect() #connecting to the channel
    random.shuffle(queue) #randomizing the playlist
    def after_song(): #moving the first song to the end so it's on loop, and playing the next one
        queue.append(queue[0])
        del queue[0]
        player = await voice.play(queue[0], after=await after_song())
    player = await voice.play(queue[0], after=await after_song()) #plays a song from the playlist, then runs after_song() when it ends

client.run(mytokenidontwanttoshowitsry)


    


    I wanted it to play all the songs in an infinite loop, but I can't find how to correctly detect the end of a song...

    


  • C++ h264 ffmpeg/libav encode/decode(lossless) issues

    1 February 2017, by MrSmith

    Insights on encoding/decoding video with ffmpeg h264 (lossless)

    So I got something working on the encoding part: I can encode an AVI in h264, however VLC won't play it, though Totem will.
    Decoding the same file proves troublesome. I want the exact same data/frames going in as coming out, but I get these:

    saving frame   5
    Video decoding
    [h264 @ 0x1d19880] decode_slice_header error
    frame :6
    saving frame   6
    Video decoding
    [h264 @ 0x1d19880] error while decoding MB 15 7, bytestream -27
    [h264 @ 0x1d19880] concealing 194 DC, 194 AC, 194 MV errors in I frame
    frame :7
    saving frame   7
    Video decoding
    [h264 @ 0x1d19880] decode_slice_header error

    and ultimately this:

    [H264 Decoder @ 0x7f1320766040] frame :11
    Broken frame packetizing
    [h264 @ 0x1d19880] SPS changed in the middle of the frame
    [h264 @ 0x1d19880] decode_slice_header error
    [h264 @ 0x1d19880] no frame!
    Error while decoding frame 11

    GAME OVER.

    Now I suspect that I have to go back to step 1, the encoding part; there is probably a good reason VLC won't play it!

    I encode like this.

    void encode(char *Y,char *U,char *V){
    av_init_packet(&pkt);
    pkt.data = NULL;    // packet data will be allocated by the encoder
    pkt.size = 0;
    fflush(stdout);

    frame->data[0] = (uint8_t*)Y;
    frame->data[1] = (uint8_t*)U;
    frame->data[2] = (uint8_t*)V;
    frame->pts = ++i;

    ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
    if (ret < 0) {
       fprintf(stderr, "Error encoding frame\n");
       exit (EXIT_FAILURE);
    }
    if (got_output) {
       printf("Write frame %3d (size=%5d)\n", i, pkt.size);
       fwrite(pkt.data, 1, pkt.size, f);
       av_free_packet(&pkt);
    }
    }

    And the codec is set up like this:

    AVCodecID dasd = AV_CODEC_ID_H264;
    codec = avcodec_find_encoder(dasd);
    c = avcodec_alloc_context3(codec);
    c->bit_rate = 400000;
    c->width = 320;
    c->height = 240;
    c->time_base= (AVRational){1,25};
    c->gop_size = 10;
    c->max_b_frames=1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    av_opt_set(c->priv_data, "preset", "slow", 0);
    avcodec_open2(c, codec, NULL);

    Since I am going for lossless, I am not dealing with delayed frames (is this a correct assumption?).
    I may not actually be encoding lossless; it seems like I may have to go with something like:

    AVDictionary *param = NULL; // must be initialized to NULL before av_dict_set()
    av_dict_set(&param, "qp", "0", 0);

    And then open...

    So I guess my questions are these:

    • What are the correct codec params for lossless encoding (and is h264 a terrible idea in this regard)?
    • Do I need to handle delayed frames when going for lossless?
    • Why is VLC mad at me?

    Thanks.