
Media (91)

Other articles (68)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Possibility of deployment as a farm

    12 April 2011

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This allows, for example: implementation costs to be shared between several projects/individuals; rapid deployment of a multitude of unique sites; avoiding having to dump every creation into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)

On other sites (7249)

  • avformat/segment : Fix leak and invalid free of AVIOContext

    6 September 2020, by Andreas Rheinhardt
    avformat/segment : Fix leak and invalid free of AVIOContext
    

    seg_init() and seg_write_header() currently contain a few error paths
    in which an already opened AVIOContext for the child muxer leaks (namely
    if there are unrecognized options for the child muxer or if writing the
    header of the child muxer fails); the reason for this is that this
    AVIOContext is not closed in the deinit function. If all goes well, it
    is closed when writing the trailer. It follows that the AVIOContext
    also leaks when the trailer is never written, even when writing the
    header succeeds.

    But simply freeing said AVIOContext in the deinit function is
    complicated by the fact that the AVIOContext may or may not have been
    opened via the io_open callback: if options are set to discard header
    and trailer, said AVIOContext can also be a null context which must not
    be closed via the io_close callback. This may lead to crashes, as
    io_close may presume the AVIOContext's opaque to be set. It currently
    works with the default io_close callback, which simply calls avio_close(),
    because avio_close() doesn't care about opaque being NULL since commit
    6e8e8431e15a58aa44cfdd8c11f9ea096837c0fa. Therefore this commit records
    which of the two kinds of AVIOContext is currently in use, so that the
    right way to close it can be chosen.
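
    A minimal sketch of the pattern the commit describes, assuming a flag
    on the muxer's private context; the field and function names are
    illustrative and do not claim to match the actual patch in
    libavformat/segment.c:

    /* Illustrative sketch only: names do not match segment.c. */
    #include "avformat.h"
    #include "internal.h" /* ff_format_io_close() */

    typedef struct SegSketchContext {
        AVIOContext *pb;  /* I/O context of the current child muxer */
        int is_null_ctx;  /* 1: null context from avio_alloc_context(),
                             0: opened via the io_open callback */
    } SegSketchContext;

    static void seg_sketch_deinit(AVFormatContext *s)
    {
        SegSketchContext *seg = s->priv_data;
        if (!seg->pb)
            return;
        if (seg->is_null_ctx) {
            /* Never went through io_open, so it must not be handed to
             * io_close; free the buffer and the context directly. */
            av_freep(&seg->pb->buffer);
            avio_context_free(&seg->pb);
        } else {
            /* Opened via io_open: close through the matching callback. */
            ff_format_io_close(s, &seg->pb);
        }
    }

    Calling this from both the deinit and trailer-writing paths (both
    avio_context_free() and ff_format_io_close() NULL the pointer) closes
    the context exactly once, whichever way it was opened.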

    Finally there was one instance (namely if initializing the child muxer
    fails with no unrecognized options) where the AVIOContext was always
    closed via the io_close callback. The above remark applies to this; it
    has been fixed, too.

    Reviewed-by: Ridley Combs <rcombs@rcombs.me>
    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavformat/segment.c
  • Stream OpenGL framebuffer over HTTP (via FFmpeg)

    16 June 2022, by mOfl

    I have an OpenGL application whose rendered images need to be streamed over the internet to mobile clients. Previously, it sufficed to simply record the rendering into a video file, which is already working; now this should be extended to streaming.


    What is working right now:

    • Render a scene to an OpenGL framebuffer object
    • Capture the FBO content using NvIFR
    • Encode it to H.264 using NvENC (no CPU round trip required)
    • Download the encoded frame to host memory as a byte array
    • Append this frame to a video file

    None of these steps involves FFmpeg or any other library so far. I now want to replace the last step with "Stream the current frame's byte array over the internet", and I assume that using FFmpeg and FFserver would be a reasonable choice for this. Am I correct? If not, what would be the proper way?


    If so, how do I approach this within my C++ code? As pointed out, the frame is already encoded. Also, there is no sound or other stuff, simply an H.264-encoded frame as a byte array that is updated irregularly and should be converted into a steady video stream. I assume that this would be FFmpeg's job and that the subsequent streaming via FFserver would be simple from there. What I don't know is how to feed my data to FFmpeg in the first place, as all FFmpeg tutorials I found (in a non-exhaustive search) work on a file or a webcam/capture device as the data source, not on volatile data in main memory.
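
    Not from the original post, but one common way to do this with the
    libavformat API (FFmpeg's API is C and links fine from C++): wrap each
    already-encoded frame in an AVPacket and mux it into an MPEG-TS stream
    over UDP, with no re-encoding and no file. Instead of FFserver, this
    streams directly; the URL, resolution, frame rate, and the
    get_encoded_frame() helper below are assumptions for illustration:

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* Hypothetical placeholder for "download the NvENC output (Annex B
     * H.264) to host memory", which the question already has working. */
    extern void get_encoded_frame(uint8_t **data, int *size);

    int stream_frames(void)
    {
        AVFormatContext *oc = NULL;
        int ret = avformat_alloc_output_context2(&oc, NULL, "mpegts",
                                                 "udp://127.0.0.1:1234");
        if (ret < 0)
            return ret;

        AVStream *st = avformat_new_stream(oc, NULL);
        if (!st) {
            ret = AVERROR(ENOMEM);
            goto end;
        }
        st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        st->codecpar->codec_id   = AV_CODEC_ID_H264;
        st->codecpar->width      = 1920; /* assumed render size */
        st->codecpar->height     = 1080;

        ret = avio_open(&oc->pb, "udp://127.0.0.1:1234", AVIO_FLAG_WRITE);
        if (ret < 0)
            goto end;
        ret = avformat_write_header(oc, NULL);
        if (ret < 0)
            goto end;

        for (int i = 0; i < 300; i++) { /* e.g. 10 s at an assumed 30 fps */
            uint8_t *data; int size;
            get_encoded_frame(&data, &size);

            AVPacket *pkt = av_packet_alloc();
            pkt->data = data;
            pkt->size = size;
            pkt->stream_index = st->index;
            /* Stamp a steady 30 fps timeline in the muxer's time base,
             * regardless of how irregularly frames are rendered. */
            pkt->pts = pkt->dts =
                av_rescale_q(i, (AVRational){1, 30}, st->time_base);
            ret = av_interleaved_write_frame(oc, pkt); /* copies the data */
            av_packet_free(&pkt);
            if (ret < 0)
                break;
        }
        av_write_trailer(oc);
    end:
        avio_closep(&oc->pb);
        avformat_free_context(oc);
        return ret;
    }

    MPEG-TS is chosen here because it needs no seeking, so the same muxer
    also works with a custom write callback (see the last sketch below).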


    The file mentioned above, which I am already able to create, is a C++ file stream to which I append each single frame, meaning that differing rendering and video frame rates are not handled correctly. This also needs to be taken care of at some point.
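
    One sketch for that (again not from the post): instead of counting
    frames, derive each PTS from the frame's wall-clock arrival time; the
    90 kHz time base is an assumption:

    #include <libavutil/time.h>
    #include <libavutil/mathematics.h>

    /* Convert the wall-clock arrival time of an irregularly rendered
     * frame into a PTS in an assumed 90 kHz stream time base. */
    static int64_t pts_from_wallclock(int64_t *start_us)
    {
        int64_t now = av_gettime(); /* microseconds */
        if (!*start_us)
            *start_us = now;
        return av_rescale_q(now - *start_us,
                            (AVRational){1, 1000000}, /* microseconds */
                            (AVRational){1, 90000});  /* 90 kHz */
    }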


    Can somebody point me in the right direction ? Can I forward data from my application to FFmpeg to build a proper video feed without writing to the hard disk ? Tutorials are greatly appreciated. By the way FFmpeg/FFserver is not mandatory. If you have a better idea for streaming of OpenGL framebuffer contents, I'm eager to know.
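
    As for avoiding the hard disk entirely: libavformat can write its muxed
    output through a caller-supplied callback instead of a file, via
    avio_alloc_context() (callback signature as in FFmpeg 6 and earlier;
    7.0 adds const). A hedged sketch, where send_to_network() is a
    hypothetical stand-in for the application's own transport:

    #include <libavformat/avformat.h>

    /* Hypothetical placeholder for the application's own transport. */
    extern int send_to_network(const uint8_t *buf, int size);

    static int write_cb(void *opaque, uint8_t *buf, int buf_size)
    {
        /* libavformat hands us each chunk of muxed output here. */
        return send_to_network(buf, buf_size);
    }

    static AVIOContext *make_callback_io(void)
    {
        const int buf_size = 64 * 1024;
        uint8_t *buffer = av_malloc(buf_size);
        if (!buffer)
            return NULL;
        /* write_flag = 1: output context; no read or seek callbacks, so
         * the container must not require seeking (MPEG-TS does not). */
        return avio_alloc_context(buffer, buf_size, 1, NULL,
                                  NULL, write_cb, NULL);
    }

    Assigning the result to oc->pb before avformat_write_header() (instead
    of calling avio_open()) makes the muxer in the earlier sketch deliver
    its output to the callback; free it afterwards with
    av_freep(&pb->buffer) followed by avio_context_free(&pb).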
