
Other articles (76)

  • General document management

    13 May 2011, by

    MediaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original available for download in case it cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
    The tables below explain what MediaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • HTML5 audio and video support

    13 avril 2011, par

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player was created specifically for MediaSPIP and can easily be adapted to fit a particular theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (12474)

  • Send H.264 encoded stream through RTMP using FFmpeg

    15 November 2016, by Galaxy

    I followed this to encode a sequence of images into an H.264 video.

    Here is the output part of my code:

    // Convert the source frame (RGBA, 4 bytes per pixel) into the x264 input picture.
    int srcstride = outwidth * 4;
    sws_scale(convertCtx, src_data, &srcstride, 0, outheight, pic_in.img.plane, pic_in.img.i_stride);
    x264_nal_t* nals;
    int i_nals;
    int frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
    if (frame_size > 0) {  // < 0 signals an error, 0 means the frame was buffered
       // x264 lays the NAL payloads out contiguously, so one write covers all NALs.
       fwrite(nals[0].p_payload, frame_size, 1, fp);
    }

    This is in a loop to process frames and write them into a file.

    Now I'm trying to stream these encoded frames over RTMP. As far as I know, the container format for RTMP is FLV, so I tried it from the command line first:

    ffmpeg -i test.h264 -vcodec copy -f flv rtmp://localhost:1935/hls/test

    This works well for streaming an H.264-encoded video file.

    But how can I implement this in C++ and stream the frames as they are generated, just as I did when streaming my FaceTime camera:

    ffmpeg -f avfoundation -pix_fmt uyvy422  -video_size 1280x720 -framerate 30 -i "1:0" -pix_fmt yuv420p -vcodec libx264 -preset veryfast -acodec libvo_aacenc -f flv -framerate 30 rtmp://localhost:1935/hls/test

    This may be a common and practical topic, but I've been stuck here for days and really need some relevant experience. Thank you!
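
    One common route, sketched below under stated assumptions: hand the x264 output to libavformat and let its FLV muxer speak RTMP, replacing the fwrite() call above. The open_rtmp/send_frame/close_rtmp helpers are hypothetical names, the timestamps use a bare frame counter, and the SPS/PPS extradata step (available from x264_encoder_headers()) is only indicated in a comment; error handling is omitted.

    // Sketch: mux pre-encoded H.264 NALs into FLV over RTMP with libavformat.
    extern "C" {
    #include <libavformat/avformat.h>
    }

    static AVFormatContext *octx;
    static AVStream *vst;
    static int64_t frame_index;

    void open_rtmp(const char *url, int width, int height) {
       avformat_network_init();
       avformat_alloc_output_context2(&octx, nullptr, "flv", url);

       vst = avformat_new_stream(octx, nullptr);
       vst->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
       vst->codecpar->codec_id   = AV_CODEC_ID_H264;
       vst->codecpar->width      = width;
       vst->codecpar->height     = height;
       // The FLV muxer wants the SPS/PPS headers as codecpar->extradata;
       // x264_encoder_headers() provides them before the first frame.

       avio_open(&octx->pb, url, AVIO_FLAG_WRITE);
       avformat_write_header(octx, nullptr);
    }

    // Call once per encoded frame in place of fwrite().
    void send_frame(uint8_t *payload, int size, int fps) {
       AVPacket pkt;
       av_init_packet(&pkt);
       pkt.data = payload;            // nals[0].p_payload: NALs are contiguous
       pkt.size = size;               // frame_size from x264_encoder_encode()
       pkt.stream_index = vst->index;
       // Keyframes should additionally set pkt.flags |= AV_PKT_FLAG_KEY.
       // Rescale a plain frame counter into the stream time base.
       pkt.pts = pkt.dts = av_rescale_q(frame_index++,
                                        AVRational{1, fps}, vst->time_base);
       av_interleaved_write_frame(octx, &pkt);
    }

    void close_rtmp() {
       av_write_trailer(octx);
       avio_closep(&octx->pb);
       avformat_free_context(octx);
    }

    The ffmpeg CLI does essentially this internally when given -f flv rtmp://..., so the command-line trial above is a good reference for which options map to which fields.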

  • Java - Stream OpenGL Display to Android

    24 October 2016, by Intektor

    I have tried to solve this problem for days now, but I could not find a working solution. I am trying to stream my game screen (lwjgl) to my Android smartphone (I have a framebuffer with the texture), and I have already built a fully working packet system. But there are several problems I have no idea how to solve. First of all, I don't know in which format I should send the framebuffer; for example, I can't send it as a BufferedImage, because that class doesn't exist on Android. I tried the jcodec library, but there is no documentation for it, and I didn't find any examples that fit my case. I think I have to encode and decode the stream with H.264 to make it a realtime live stream (that's very important). I also heard about ffmpeg (and I found a Java wrapper for it: https://github.com/bramp/ffmpeg-cli-wrapper), but again there is no documentation on how to use it to stream to my phone. There is also the problem that once the frames reach my smartphone, I don't know how to have them loaded by the graphics card.

    Here is what I have done so far.
    My packet:

    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public class ImagePacketToClient implements Packet {

       public byte[] jpgInfo;
       public int width;
       public int height;

       BufferedImage image;

       public ImagePacketToClient() {
       }

       public ImagePacketToClient(BufferedImage image, int width, int height) {
          this.image = image;
          this.width = width;
          this.height = height;
       }

       @Override
       public void write(DataOutputStream out) throws IOException {
          ByteArrayOutputStream baos = new ByteArrayOutputStream();
          ImageIO.write(image, "jpg", baos);
          baos.flush();
          byte[] bytes = baos.toByteArray();
          baos.close();
          out.writeInt(bytes.length);
          // Write the raw bytes; writing each byte with writeInt() would emit
          // four bytes per byte and no longer match read() below.
          out.write(bytes);
       }

       @Override
       public void read(DataInputStream in) throws IOException {
          int length = in.readInt();
          jpgInfo = new byte[length];
          // readFully() blocks until the whole JPEG payload has arrived.
          in.readFully(jpgInfo);
       }
    }

    The code that gets called after rendering has finished (mc.getFramebuffer() is the framebuffer I can use):

    ScaledResolution resolution = new ScaledResolution(mc);
    BufferedImage screenshot = ScreenShotHelper.createScreenshot(resolution.getScaledWidth(), resolution.getScaledHeight(), mc.getFramebuffer());
    ImagePacketToClient packet = new ImagePacketToClient(screenshot, screenshot.getWidth(), screenshot.getHeight());
    PacketHelper.sendPacket(packet, CardboardMod.communicator.connectedSocket);
    screenshot.flush();

    public static BufferedImage createScreenshot(int width, int height, Framebuffer framebufferIn)
    {
       if (OpenGlHelper.isFramebufferEnabled())
       {
           width = framebufferIn.framebufferTextureWidth;
           height = framebufferIn.framebufferTextureHeight;
       }

       int i = width * height;

       if (pixelBuffer == null || pixelBuffer.capacity() < i)
       {
           pixelBuffer = BufferUtils.createIntBuffer(i);
           pixelValues = new int[i];
       }

       // 3333 = GL_PACK_ALIGNMENT, 3317 = GL_UNPACK_ALIGNMENT: byte-align rows.
       GlStateManager.glPixelStorei(3333, 1);
       GlStateManager.glPixelStorei(3317, 1);
       pixelBuffer.clear();

       if (OpenGlHelper.isFramebufferEnabled())
       {
           // Read back the FBO texture: 3553 = GL_TEXTURE_2D, 32993 = GL_BGRA,
           // 33639 = GL_UNSIGNED_INT_8_8_8_8_REV (yields ARGB-packed ints).
           GlStateManager.bindTexture(framebufferIn.framebufferTexture);
           GlStateManager.glGetTexImage(3553, 0, 32993, 33639, pixelBuffer);
       }
       else
       {
           GlStateManager.glReadPixels(0, 0, width, height, 32993, 33639, pixelBuffer);
       }

       pixelBuffer.get(pixelValues);
       // Flips the image vertically (OpenGL reads rows bottom-up).
       TextureUtil.processPixelValues(pixelValues, width, height);
       BufferedImage bufferedimage;

       if (OpenGlHelper.isFramebufferEnabled())
       {
           // 1 = BufferedImage.TYPE_INT_RGB; copy only the visible part of the
           // padded FBO texture.
           bufferedimage = new BufferedImage(framebufferIn.framebufferWidth, framebufferIn.framebufferHeight, 1);
           int j = framebufferIn.framebufferTextureHeight - framebufferIn.framebufferHeight;

           for (int k = j; k < framebufferIn.framebufferTextureHeight; ++k)
           {
               for (int l = 0; l < framebufferIn.framebufferWidth; ++l)
               {
                   bufferedimage.setRGB(l, k - j, pixelValues[k * framebufferIn.framebufferTextureWidth + l]);
               }
           }
       }
       else
       {
           bufferedimage = new BufferedImage(width, height, 1);
           bufferedimage.setRGB(0, 0, width, height, pixelValues, 0, width);
       }

       return bufferedimage;
    }

    Honestly, I don't want to use this BufferedImage stuff, because it halves my framerate, and that's not good; one alternative is sketched below.
    I also don't have any code for my Android application yet, because I couldn't figure out how the image could be recreated on Android and loaded after that.
    I hope you understand my problem, and I'd be happy about every tip you can give me :)
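
    One hedged sketch of the alternative mentioned above: instead of sending a JPEG per packet, pipe the raw pixel buffer into an ffmpeg child process and let it encode H.264 and publish a stream the phone can play. Everything here is illustrative: ffmpeg is assumed to be on the PATH, the RTMP URL is a placeholder, and the caller is responsible for producing width*height*4 bytes of BGRA per frame.

    import java.io.IOException;
    import java.io.OutputStream;

    // Sketch: feed raw BGRA frames from the game loop to an ffmpeg process,
    // which encodes them with libx264 and pushes an RTMP stream.
    public class FfmpegStreamer {

       private Process ffmpeg;
       private OutputStream video;  // ffmpeg's stdin: one raw frame after another

       public void start(int width, int height, int fps) throws IOException {
          ffmpeg = new ProcessBuilder(
                "ffmpeg",
                "-f", "rawvideo",               // no container, just pixels
                "-pix_fmt", "bgra",             // matches the GL_BGRA readback
                "-video_size", width + "x" + height,
                "-framerate", String.valueOf(fps),
                "-i", "-",                      // read frames from stdin
                "-vcodec", "libx264",
                "-preset", "ultrafast",
                "-tune", "zerolatency",         // keep encoder latency low
                "-pix_fmt", "yuv420p",
                "-f", "flv", "rtmp://localhost:1935/live/game")
                .redirectErrorStream(true)
                .start();
          video = ffmpeg.getOutputStream();
       }

       // Call once per rendered frame with width*height*4 bytes of BGRA data.
       public void writeFrame(byte[] bgraPixels) throws IOException {
          video.write(bgraPixels);
       }

       public void stop() throws IOException {
          video.close();  // EOF lets ffmpeg flush and finish the stream
          ffmpeg.destroy();
       }
    }

    On the Android side, a player that understands the chosen protocol (for example ExoPlayer with an RTMP or HLS source) can then render the stream directly on the GPU, which sidesteps decoding frames by hand.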

  • Add watermark to all videos in a folder and enable it only during certain time intervals

    20 October 2016, by Neo Herakles

    I'm making a batch file to watermark all the videos in a folder using FFmpeg. I have to place the watermark at 1/3 of the duration of each video. I currently have the script below; it worked on individual files, but I can't seem to make it work for the whole folder. What am I doing wrong? Also, is there a way to enable the watermark multiple times, once at 1/3 of the duration and again at 2/3? (One option is sketched after the script.)

    @echo off
    setlocal enabledelayedexpansion
    for %%I in ("%~dp0water\*.mp4") do (
      rem Use a separate loop variable so the outer %%I is not shadowed, and
      rem pass "%%I" as-is: it already holds the full path including .mp4.
      for /F "delims=" %%D in ('ffprobe.exe -v error -show_entries format^=duration -of default^=noprint_wrappers^=1:nokey^=1 "%%I" 2^>^&1') do set "duration=%%D"
      rem !duration! needs delayed expansion because it is set inside this block.
      ffmpeg.exe -i "%%I" -i Watermark.png -filter_complex "[0:v]scale=iw:ih[v0];[1:v][v0]scale2ref=iw/6:ih/10[logo][0v];[0v][logo]overlay=W-w-3:H-h-3:enable='between(t,!duration!/3,(!duration!/3)+2)'[v]" -map "[v]" -map 0:a -codec:v libx264 -preset ultrafast -crf 23 -codec:a copy "%~dp0out\%%~nI.mp4"
    )
    endlocal
    pause
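
    On the second question: the overlay enable option takes a full expression, and between() clauses evaluate to 0 or 1, so adding two of them together acts as an OR. A hedged sketch of just the changed option, reusing the !duration! variable from the script above (the 2-second display window is an assumption carried over from it):

    overlay=W-w-3:H-h-3:enable='between(t,!duration!/3,!duration!/3+2)+between(t,2*!duration!/3,2*!duration!/3+2)'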

    Thanks a lot for all the help I've received over the last few days; it has really helped me improve, although I still have a long way to go.