Keyword: Tags / Nine Inch Nails

Other articles (111)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all of the software dependencies on the server.
    If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the following two images to compare.
    To do so, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to improve, for example select[multiple] for multiple-selection lists (...)

On other sites (9982)

  • Java - Stream OpenGL Display to Android

    24 October 2016, by Intektor

    I have been trying to solve this problem for days now, but I couldn't find a working solution. I am trying to stream my game screen (LWJGL) to my Android smartphone (I have a framebuffer with the texture), and I have already built a fully working packet system. But there are several problems I have no idea how to solve. First of all, I don't know in which format I should send the framebuffer contents; for example, I can't send it as a BufferedImage, because that class doesn't exist on Android. I tried using the jcodec library, but there is no documentation for it, and I didn't find any examples that fit my case. I think I have to encode and decode the stream with H.264 to make it a realtime live stream (that's very important). I also heard about FFmpeg (and I found a Java wrapper for it: https://github.com/bramp/ffmpeg-cli-wrapper), but again there is no documentation on how to use it to stream to my phone. Finally, once the frames arrive on my smartphone, how can I have them loaded by the graphics card?

    Here is what I have done so far:
    My packet:

    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public class ImagePacketToClient implements Packet {

        public byte[] jpgInfo;
        public int width;
        public int height;

        BufferedImage image;

        public ImagePacketToClient() {
        }

        public ImagePacketToClient(BufferedImage image, int width, int height) {
            this.image = image;
            this.width = width;
            this.height = height;
        }

        @Override
        public void write(DataOutputStream out) throws IOException {
            // Encode the frame as JPEG and send it as a length-prefixed block of bytes.
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ImageIO.write(image, "jpg", baos);
            baos.flush();
            byte[] bytes = baos.toByteArray();
            baos.close();
            out.writeInt(bytes.length);
            // Write the raw bytes directly; writing each byte with writeInt()
            // would inflate the payload fourfold and no longer match read().
            out.write(bytes, 0, bytes.length);
        }

        @Override
        public void read(DataInputStream in) throws IOException {
            // Read the length prefix, then the JPEG bytes.
            int length = in.readInt();
            jpgInfo = new byte[length];
            in.readFully(jpgInfo);
        }
    }

    The code that gets called after the rendering has finished; mc.getFramebuffer() is the framebuffer I can use:

    ScaledResolution resolution = new ScaledResolution(mc);
    BufferedImage screenshot = ScreenShotHelper.createScreenshot(resolution.getScaledWidth(), resolution.getScaledHeight(), mc.getFramebuffer());
    ImagePacketToClient packet = new ImagePacketToClient(screenshot, screenshot.getWidth(), screenshot.getHeight());
    PacketHelper.sendPacket(packet, CardboardMod.communicator.connectedSocket);
    screenshot.flush();

    public static BufferedImage createScreenshot(int width, int height, Framebuffer framebufferIn)
    {
       if (OpenGlHelper.isFramebufferEnabled())
       {
           width = framebufferIn.framebufferTextureWidth;
           height = framebufferIn.framebufferTextureHeight;
       }

       int i = width * height;

       if (pixelBuffer == null || pixelBuffer.capacity() < i)
       {
           pixelBuffer = BufferUtils.createIntBuffer(i);
           pixelValues = new int[i];
       }

       // 3333 = GL_PACK_ALIGNMENT, 3317 = GL_UNPACK_ALIGNMENT
       GlStateManager.glPixelStorei(3333, 1);
       GlStateManager.glPixelStorei(3317, 1);
       pixelBuffer.clear();

       if (OpenGlHelper.isFramebufferEnabled())
       {
           // Read the FBO texture back as BGRA: 3553 = GL_TEXTURE_2D,
           // 32993 = GL_BGRA, 33639 = GL_UNSIGNED_INT_8_8_8_8_REV.
           GlStateManager.bindTexture(framebufferIn.framebufferTexture);
           GlStateManager.glGetTexImage(3553, 0, 32993, 33639, pixelBuffer);
       }
       else
       {
           GlStateManager.glReadPixels(0, 0, width, height, 32993, 33639, pixelBuffer);
       }

       pixelBuffer.get(pixelValues);
       TextureUtil.processPixelValues(pixelValues, width, height);
       BufferedImage bufferedimage;

       if (OpenGlHelper.isFramebufferEnabled())
       {
           bufferedimage = new BufferedImage(framebufferIn.framebufferWidth, framebufferIn.framebufferHeight, 1);
           int j = framebufferIn.framebufferTextureHeight - framebufferIn.framebufferHeight;

           for (int k = j; k < framebufferIn.framebufferTextureHeight; ++k)
           {
               for (int l = 0; l < framebufferIn.framebufferWidth; ++l)
               {
                   bufferedimage.setRGB(l, k - j, pixelValues[k * framebufferIn.framebufferTextureWidth + l]);
               }
           }
       }
       else
       {
           bufferedimage = new BufferedImage(width, height, 1);
           bufferedimage.setRGB(0, 0, width, height, pixelValues, 0, width);
       }

       return bufferedimage;
    }

    Honestly, I don't want to use this BufferedImage stuff, because it halves my framerate, and that's not good.
    And I don't have any code for my Android application yet, because I couldn't figure out how to recreate this image on Android and how to load it after that.
    I hope you understand my problem, and I would be happy about every tip you can give me :)
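
    One possible shape for the receiving side, as a minimal sketch in Android Java: it assumes the length-prefixed JPEG bytes written by ImagePacketToClient arrive unchanged, and the FrameReceiver class, the DataInputStream, and the ImageView are placeholders for whatever socket and UI code the app actually uses.

    import java.io.DataInputStream;
    import java.io.IOException;

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.widget.ImageView;

    public final class FrameReceiver {

        // Reads one length-prefixed JPEG frame from the stream and decodes it.
        public static Bitmap readFrame(DataInputStream in) throws IOException {
            int length = in.readInt();
            byte[] jpgBytes = new byte[length];
            in.readFully(jpgBytes);
            // BitmapFactory is the Android counterpart of ImageIO here.
            return BitmapFactory.decodeByteArray(jpgBytes, 0, length);
        }

        // Handing the Bitmap to an ImageView lets the view hierarchy upload it;
        // for a GL surface it could instead be uploaded as a texture with
        // android.opengl.GLUtils.texImage2D.
        public static void showFrame(final ImageView view, final Bitmap frame) {
            view.post(new Runnable() {
                @Override
                public void run() {
                    view.setImageBitmap(frame);
                }
            });
        }
    }

    For a real-time stream, though, sending one JPEG per frame over the packet system will stay bandwidth-heavy; an H.264 encoder on the desktop side feeding Android's MediaCodec decoder would be the more scalable route, as the question already suspects.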

  • Emscripten and Web Audio API

    29 April 2015, by Multimedia Mike — HTML5

    Ha! They said it couldn't be done! Well, to be fair, I said it couldn't be done. Or maybe that I just didn't have any plans to do it. But I did it: I used Emscripten to cross-compile a CPU-intensive C/C++ codebase (Game Music Emu) to JavaScript. Then I leveraged the Web Audio API to output audio and visualize the audio using an HTML5 canvas.

    Want to see it in action? Here's a demonstration. Perhaps I will be able to expand the reach of my Game Music site when I can drop the odd Native Client plugin. This JS-based player works great on Chrome, Firefox, and Safari across desktop operating systems.

    But this endeavor was not without its challenges.

    Programmatically Generating Audio
    First, I needed to figure out the proper method for procedurally generating audio and making it available for output. Generally, there are 2 approaches for audio output:

    1. Sit in a loop and generate audio, writing it out via a blocking audio call
    2. Implement a callback that the audio system can invoke in order to generate more audio when needed

    Option #1 is not a good idea for an event-driven language like JavaScript. So I hunted through the rather flexible Web Audio API for a method that allowed something like approach #2. Callbacks are everywhere, after all.

    I eventually found what I was looking for with the ScriptProcessorNode. It seems to be intended to apply post-processing effects to audio streams. A program registers a callback which is passed configurable chunks of audio for processing. I subverted this by simply overwriting the input buffers with the audio generated by the Emscripten-compiled library.
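
    In JavaScript terms, the trick looks roughly like this; it is only a sketch, and generateSamples() stands in for the Emscripten-exported function that renders the next block of game music:

    // Create a stereo ScriptProcessorNode and refill its output buffers with
    // freshly synthesized samples every time the browser asks for more audio.
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    var node = audioCtx.createScriptProcessor(4096, 1, 2);  // buffer size, inputs, outputs

    node.onaudioprocess = function (event) {
        var left = event.outputBuffer.getChannelData(0);
        var right = event.outputBuffer.getChannelData(1);
        generateSamples(left, right, left.length);  // placeholder for the compiled library
    };

    node.connect(audioCtx.destination);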

    The ScriptProcessorNode interface is fairly well documented and works across multiple browsers. However, it is already marked as deprecated:

    Note: As of the August 29, 2014 Web Audio API spec publication, this feature has been marked as deprecated, and is soon to be replaced by Audio Workers.

    Despite being marked as deprecated for 8 months as of this writing, there exists no appreciable amount of documentation for the successor API, these so-called Audio Workers.

    Vive la web standards!

    Visualize This
    The next problem was visualization. The Web Audio API provides the AnalyserNode API for accessing both time and frequency domain data from a running audio stream (and fetching the data as either unsigned bytes or floating-point numbers, depending on what the application needs). This is a pretty neat idea. I just wish I could make the API work. The simple demos I could find worked well enough. But when I wired up a prototype to fetch and visualize the time-domain wave, all I got were center-point samples (an array of values that were all 128).

    Even if the API did work, I'm not sure if it would have been that useful. Per my reading of the AnalyserNode API, it only returns data as a single channel. Why would I want that? My application supports audio with 2 channels. I want 2 channels of data for visualization.

    How To Synchronize
    So I rolled my own visualization solution by maintaining a circular buffer of audio samples as they were generated. Then, requestAnimationFrame() provided the rendering callbacks. The next problem was audio-visual sync. But that certainly is not unique to this situation: maintaining proper A/V sync is a perennial puzzle in real-time multimedia programming. I was able to glean enough timing information from the environment to achieve reasonable A/V sync (verify for yourself).
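
    Sketched in the same spirit (the ringBuffer bookkeeping and the drawWave() canvas routine are stand-ins for the real code):

    // The audio callback appends the samples it just produced to a ring buffer;
    // the rendering loop reads from it on every animation frame.
    var ringBuffer = new Float32Array(8192);
    var writePos = 0;

    function rememberSamples(samples) {
        for (var i = 0; i < samples.length; i++) {
            ringBuffer[writePos] = samples[i];
            writePos = (writePos + 1) % ringBuffer.length;
        }
    }

    // audioCtx.currentTime is the clock used to line the drawn window up
    // with what is actually being heard.
    function draw() {
        drawWave(ringBuffer, writePos, audioCtx.currentTime);
        animationHandle = requestAnimationFrame(draw);
    }
    var animationHandle = requestAnimationFrame(draw);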

    Pause/Resume
    The next problem I encountered with the Web Audio API was pause/resume facilities, or the lack thereof. For all its bells and whistles, the API’s omission of such facilities seems most unusual, as if the design philosophy was, “Once the user starts playing audio, they will never, ever have cause to pause the audio.”

    Then again, I must understand that mine is not a use case that the design committee considered and I'm subverting the API in ways the designers didn't intend. Typical use cases for this API seem to include such workloads as:

    • Downloading, decoding, and playing back a compressed audio stream via the network, applying effects, and visualizing the result
    • Accessing microphone input, applying effects, visualizing, encoding and sending the data across the network
    • Firing sound effects in a gaming application
    • MIDI playback via JavaScript (this honestly amazes me)

    What they did not seem to have in mind was what I am trying to do– synthesize audio in real time.

    I implemented pause/resume in a sub-par manner: pausing has the effect of generating 0 values when the ScriptProcessorNode callback is invoked, while also canceling any animation callbacks. Thus, audio output is technically still occurring; it's just that the audio is pure silence. It's not a great solution because CPU is still being used.
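
    In outline, with paused as a flag that the onaudioprocess callback above checks before asking the synthesizer for more samples, writing zeros into the output buffers when it is set:

    var paused = false;

    function setPaused(on) {
        paused = on;  // consulted by the audio callback: silence instead of new samples
        if (on) {
            cancelAnimationFrame(animationHandle);   // stop redrawing the canvas
        } else {
            animationHandle = requestAnimationFrame(draw);
        }
    }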

    Future Work
    I have a lot more player libraries to port to this new system. But I think I have a good framework set up.

  • FFMPEG and DirectX Capture in C++

    13 December 2016, by tankyx

    I have a system that allows me to capture a window and save it as an mp4, using FFmpeg. I use gdigrab to capture the frames, but it is fairly slow (60 ms per av_read_frame call).

    I know I can capture a game using the DirectX API, but I don’t know how to convert the resulting BMP to an AVFrame.

    The following code is the DirectX code I use to capture the frame

    extern void* pBits;                     // destination buffer, allocated elsewhere
    extern IDirect3DDevice9* g_pd3dDevice;
    IDirect3DSurface9* pSurface;

    // Copy the front buffer into a lockable system-memory surface.
    g_pd3dDevice->CreateOffscreenPlainSurface(ScreenWidth, ScreenHeight,
                                              D3DFMT_A8R8G8B8, D3DPOOL_SCRATCH,
                                              &pSurface, NULL);
    g_pd3dDevice->GetFrontBufferData(0, pSurface);

    D3DLOCKED_RECT lockedRect;
    pSurface->LockRect(&lockedRect, NULL,
                       D3DLOCK_NO_DIRTY_UPDATE |
                       D3DLOCK_NOSYSLOCK | D3DLOCK_READONLY);

    // Copy row by row, because the surface pitch can be wider than the image.
    for (int i = 0; i < ScreenHeight; i++)
    {
        memcpy((BYTE*) pBits + i * ScreenWidth * BITSPERPIXEL / 8,
               (BYTE*) lockedRect.pBits + i * lockedRect.Pitch,
               ScreenWidth * BITSPERPIXEL / 8);
    }

    pSurface->UnlockRect();
    pSurface->Release();

    And here is my read loop:

    while (1) {
        if (av_read_frame(pFormatCtx, &packet) < 0 || exit)
            break;
        if (packet.stream_index == videoindex) {
            // Decode video frame
            av_packet_rescale_ts(&packet, { 1, std::stoi(pParser->GetVal("video-fps")) }, pCodecCtx->time_base);
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

            if (frameFinished) {
                pFrame->pts = i;
                i++;
                sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
                pFrameRGB->pts = pFrame->pts;
                enc.encodeFrame(pFrameRGB);
            }
        }
        // Free the packet that was allocated by av_read_frame
        av_free_packet(&packet);
    }

    How can I create an AVFrame from the BMP I have, without going through av_read_frame?
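
    One possible approach, as a minimal sketch with libswscale: it assumes the D3DFMT_A8R8G8B8 pixels captured into pBits above (which correspond to AV_PIX_FMT_BGRA in FFmpeg) and an encoder that expects YUV420P; ScreenWidth and ScreenHeight are carried over from the capture code.

    extern "C" {
    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>
    }

    // Wrap the captured BGRA pixels in an AVFrame and convert them to YUV420P,
    // so the frame can be handed to the encoder without going through av_read_frame.
    AVFrame* frameFromCapture(const uint8_t* pBits, int ScreenWidth, int ScreenHeight, int64_t pts)
    {
        AVFrame* yuv = av_frame_alloc();
        yuv->format = AV_PIX_FMT_YUV420P;
        yuv->width  = ScreenWidth;
        yuv->height = ScreenHeight;
        av_frame_get_buffer(yuv, 32);            // allocate the YUV planes

        // The source is a single packed BGRA plane, 4 bytes per pixel.
        const uint8_t* srcData[4] = { pBits, NULL, NULL, NULL };
        int srcLinesize[4] = { 4 * ScreenWidth, 0, 0, 0 };

        SwsContext* sws = sws_getContext(ScreenWidth, ScreenHeight, AV_PIX_FMT_BGRA,
                                         ScreenWidth, ScreenHeight, AV_PIX_FMT_YUV420P,
                                         SWS_BILINEAR, NULL, NULL, NULL);
        sws_scale(sws, srcData, srcLinesize, 0, ScreenHeight, yuv->data, yuv->linesize);
        sws_freeContext(sws);

        yuv->pts = pts;
        return yuv;                              // caller frees with av_frame_free
    }

    Creating the SwsContext once and reusing it for every captured frame would avoid the per-frame setup cost, which matters at 60 frames per second.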