
Other articles (64)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, a preconfiguration is set up automatically by MediaSPIP init, making the new feature immediately operational. It is therefore not necessary to go through a configuration step for this.

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution   Version name           Version number
        Debian         Squeeze                6.x.x
        Debian         Wheezy                 7.x.x
        Debian         Jessie                 8.x.x
        Ubuntu         The Precise Pangolin   12.04 LTS
        Ubuntu         The Trusty Tahr        14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above or send us the fixes needed to add it (...)


On other sites (7945)

  • Emscripten and Web Audio API

    29 April 2015, by Multimedia Mike — HTML5

    Ha! They said it couldn’t be done! Well, to be fair, I said it couldn’t be done. Or maybe that I just didn’t have any plans to do it. But I did it: I used Emscripten to cross-compile a CPU-intensive C/C++ codebase (Game Music Emu) to JavaScript. Then I leveraged the Web Audio API to output audio and visualize the audio using an HTML5 canvas.

    Want to see it in action? Here’s a demonstration. Perhaps I will be able to expand the reach of my Game Music site when I can drop the odd Native Client plugin. This JS-based player works great on Chrome, Firefox, and Safari across desktop operating systems.

    But this endeavor was not without its challenges.

    Programmatically Generating Audio
    First, I needed to figure out the proper method for procedurally generating audio and making it available to output. Generally, there are 2 approaches for audio output:

    1. Sit in a loop and generate audio, writing it out via a blocking audio call
    2. Implement a callback that the audio system can invoke in order to generate more audio when needed

    Option #1 is not a good idea for an event-driven language like JavaScript. So I hunted through the rather flexible Web Audio API for a method that allowed something like approach #2. Callbacks are everywhere, after all.

    I eventually found what I was looking for with the ScriptProcessorNode. It seems to be intended to apply post-processing effects to audio streams. A program registers a callback which is passed configurable chunks of audio for processing. I subverted this by simply overwriting the input buffers with the audio generated by the Emscripten-compiled library.
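
    The general pattern, as a rough TypeScript sketch (not the article’s code; generateSamples is a hypothetical stand-in for the Emscripten-compiled Game Music Emu entry point):

    // Hypothetical stand-in for the Emscripten-compiled synthesis routine.
    declare function generateSamples(left: Float32Array, right: Float32Array): void;

    const ctx = new AudioContext();
    // 4096-sample chunks, 1 (ignored) input channel, 2 output channels.
    const node = ctx.createScriptProcessor(4096, 1, 2);

    node.onaudioprocess = (e: AudioProcessingEvent) => {
        // Overwrite the callback's buffers with freshly synthesized audio.
        generateSamples(e.outputBuffer.getChannelData(0),
                        e.outputBuffer.getChannelData(1));
    };

    node.connect(ctx.destination); // audio is now pulled through the callback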

    The ScriptProcessorNode interface is fairly well documented and works across multiple browsers. However, it is already marked as deprecated:

    Note: As of the August 29, 2014 Web Audio API spec publication, this feature has been marked as deprecated, and is soon to be replaced by Audio Workers.

    Despite being marked as deprecated for 8 months as of this writing, there exists no appreciable amount of documentation for the successor API, these so-called Audio Workers.

    Vive la web standards!

    Visualize This
    The next problem was visualization. The Web Audio API provides the AnalyserNode interface for accessing both time- and frequency-domain data from a running audio stream (fetching the data as either unsigned bytes or floating-point numbers, depending on what the application needs). This is a pretty neat idea. I just wish I could make the API work. The simple demos I could find worked well enough. But when I wired up a prototype to fetch and visualize the time-domain wave, all I got were center-point samples (an array of values that were all 128).
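
    For reference, the usual AnalyserNode pattern looks roughly like this TypeScript sketch (the oscillator is only a placeholder signal source, not part of the article’s setup):

    const audioCtx = new AudioContext();
    const source = audioCtx.createOscillator(); // placeholder audio source
    const analyser = audioCtx.createAnalyser();
    source.connect(analyser);
    source.start();

    // Time-domain data arrives as unsigned bytes, where 128 is the center
    // line; an array of all 128s is exactly the silent result described above.
    const timeDomain = new Uint8Array(analyser.fftSize);
    analyser.getByteTimeDomainData(timeDomain);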

    Even if the API did work, I’m not sure if it would have been that useful. Per my reading of the AnalyserNode API, it only returns data as a single channel. Why would I want that? My application supports audio with 2 channels. I want 2 channels of data for visualization.

    How To Synchronize
    So I rolled my own visualization solution by maintaining a circular buffer of audio samples as they were generated. Then requestAnimationFrame() provided the rendering callbacks. The next problem was audio-visual sync. But that certainly is not unique to this situation: maintaining proper A/V sync is a perennial puzzle in real-time multimedia programming. I was able to glean enough timing information from the environment to achieve reasonable A/V sync (verify for yourself).
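
    A TypeScript sketch of that arrangement (drawWaveform is a hypothetical canvas-rendering helper, not the article’s code):

    // Hypothetical canvas-rendering helper.
    declare function drawWaveform(samples: Float32Array, writePos: number): void;

    const ring = new Float32Array(8192); // circular buffer of the freshest samples
    let writePos = 0;

    // Called with each chunk of newly generated audio.
    function storeSamples(chunk: Float32Array): void {
        for (const s of chunk) {
            ring[writePos] = s;
            writePos = (writePos + 1) % ring.length;
        }
    }

    // requestAnimationFrame drives rendering independently of audio generation.
    function frame(): void {
        drawWaveform(ring, writePos);
        requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);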

    Pause/Resume
    The next problem I encountered with the Web Audio API was pause/resume facilities, or the lack thereof. For all its bells and whistles, the API’s omission of such facilities seems most unusual, as if the design philosophy was, “Once the user starts playing audio, they will never, ever have cause to pause the audio.”

    Then again, I must understand that mine is not a use case that the design committee considered and I’m subverting the API in ways the designers didn’t intend. Typical use cases for this API seem to include such workloads as:

    • Downloading, decoding, and playing back a compressed audio stream via the network, applying effects, and visualizing the result
    • Accessing microphone input, applying effects, visualizing, encoding and sending the data across the network
    • Firing sound effects in a gaming application
    • MIDI playback via JavaScript (this honestly amazes me)

    What they did not seem to have in mind was what I am trying to do: synthesize audio in real time.

    I implemented pause/resume in a sub-par manner: pausing has the effect of generating 0 values when the ScriptProcessorNode callback is invoked, while also canceling any animation callbacks. Thus, audio output is technically still occurring; it’s just that the audio is pure silence. It’s not a great solution because the CPU is still being used.
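
    In sketch form (TypeScript, with the same hypothetical generateSamples as before):

    declare function generateSamples(left: Float32Array, right: Float32Array): void;

    let paused = false; // toggled by the player's pause/resume control

    function onAudioProcess(e: AudioProcessingEvent): void {
        const left = e.outputBuffer.getChannelData(0);
        const right = e.outputBuffer.getChannelData(1);
        if (paused) {
            // The node keeps firing (and burning CPU); the output is just silence.
            left.fill(0);
            right.fill(0);
            return;
        }
        generateSamples(left, right);
    }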

    Future Work
    I have a lot more player libraries to port to this new system. But I think I have a good framework set up.

  • Android - How to merge 2 videos SIDE by SIDE? [on hold]

    25 April 2015, by Lakshmanan

    I want to merge two video files (MP4) SIDE BY SIDE. I have tried the following options.

    1) FFmpeg - it just merges them one after another.

    2) mp4parser - it did the same; I can only merge them one after another.

    3) Android screen capture -> I can merge them side by side, but with screen capture I could not get the audio of the file.

    The purpose of this merging: I have a main screen with some gameplay; the user can touch a portion of the screen and it plays a sound as well as an animation.

    My game screen has a Record option, so when the user clicks it, I need to record the user’s face reaction via the front camera as well as the gameplay. Once recording is complete, I need to play the game recording on the left side and the face recording on the right side.

    I did this by saving the touch events in the database and replaying them on the left side of the screen, while on the right side I played the video recorded by the front camera.

    But I need to share this same video to social media, and for that I need a single video. I can screen-capture the gameplay (left side) as a video file, and I also have the front-camera recording. So I need to merge these two videos into a single video that plays just as it does in my app.

    Please let me know if there is any way to do this.
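
    For what it’s worth, a single FFmpeg command using the hstack filter can produce this kind of side-by-side composition, assuming a reasonably recent FFmpeg build and two inputs of equal height: ffmpeg -i game.mp4 -i face.mp4 -filter_complex "[0:v][1:v]hstack=inputs=2[v]" -map "[v]" -map 1:a? out.mp4 (the file names are placeholders, and -map 1:a? keeps the front-camera audio track if one exists).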

  • Blitted OpenGL Textures take less memory and CPU

    16 April 2015, by Pedro H. Forli

    I’m making a game using pygame + pyopengl, and right now I’m trying to build a video player in this context. To do so I use ffmpeg to load different video formats, convert each frame to an OpenGL texture as shown below, and then play the video.

    import pygame
    from OpenGL.GL import *


    class Texture(object):
        def __init__(self, data, w=0, h=0):
            """
            Initialize the texture from 3 different types of data:
            filename = open the image, get its string and produce the texture
            surface = get its string and produce the texture
            string = raw texture data; the w and h provided are used
            """
            if type(data) == str:
                texture_data = self.load_image(data)

            elif type(data) == pygame.Surface:
                texture_data = pygame.image.tostring(data, "RGBA", True)
                self.w, self.h = data.get_size()

            elif type(data) == bytes:
                self.w, self.h = w, h
                texture_data = data

            self.texID = 0
            self.load_texture(texture_data)

        def load_image(self, data):
            texture_surface = pygame.image.load(data).convert_alpha()
            texture_data = pygame.image.tostring(texture_surface, "RGBA", True)
            self.w, self.h = texture_surface.get_size()

            return texture_data

        def load_texture(self, texture_data):
            # Allocate a texture object and upload the RGBA pixel data to it.
            self.texID = glGenTextures(1)

            glBindTexture(GL_TEXTURE_2D, self.texID)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, self.w,
                         self.h, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                         texture_data)

    The problem is that when I load all the textures of a given video, my RAM usage goes through the ceiling, to about 800 MB. But it’s possible to work around this by blitting each texture as it loads, as shown below.

    def render():
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        glLoadIdentity()
        glDisable(GL_LIGHTING)
        glEnable(GL_TEXTURE_2D)
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
        glClearColor(0, 0, 0, 1.0)


    def Draw(texture, top, left, bottom, right):
        """
        Draw the image on the OpenGL screen
        """
        # Bind the texture and draw it on a screen-aligned quad.
        glBindTexture(GL_TEXTURE_2D, texture.texID)
        glBegin(GL_QUADS)

        # The top left of the image must be at the indicated position
        glTexCoord2f(0.0, 1.0)
        glVertex2f(left, top)

        glTexCoord2f(1.0, 1.0)
        glVertex2f(right, top)

        glTexCoord2f(1.0, 0.0)
        glVertex2f(right, bottom)

        glTexCoord2f(0.0, 0.0)
        glVertex2f(left, bottom)

        glEnd()


    def update(t):
        render()
        Draw(t, -0.5, -0.5, 0.5, 0.5)

        # Check for basic events on the pygame interface
        for event in pygame.event.get():
            BASIC_Game.QUIT_Event(event)

        pygame.display.flip()

    Although this reduces RAM consumption to an acceptable level, it makes the loading time longer than the video itself.

    I really don’t understand why OpenGL works this way, but is there a way to make a texture efficient without blitting it first?