Other articles (69)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file available here contains only the MediaSPIP sources, in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    • implementation costs to be shared between several different projects/individuals
    • rapid deployment of multiple unique sites
    • creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (12961)

  • flvdec : Honor the "flv_metadata" option for the "datastream" metadata field

    9 February 2024, by Martin Storsjö
    flvdec : Honor the "flv_metadata" option for the "datastream" metadata field
    

    By default the option "flv_metadata" (internally using the field
    name "trust_metadata") is set to 0, meaning that we don't allocate
    streams based on information in the metadata, only based on
    actual streams we encounter. However, the "datastream" metadata field
    would still allocate a subtitle stream.

    When muxing, the "datastream" field is added if either a data stream
    or a subtitle stream is present - but the same metadata field is used
    to preemptively create a subtitle stream only. Thus, if the field
    was added due to a data stream, not a subtitle stream, the demuxer
    would create a stream which won't receive any actual packets.
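    A minimal sketch of the resulting behaviour (assuming the field and
    helper names used in libavformat/flvdec.c, FLVContext.trust_metadata
    and create_stream; this is conceptual, not the verbatim patch):

      /* Conceptually, in the FLV metadata parser: only let the
       * "datastream" field preallocate a subtitle stream when the
       * user opted in with -flv_metadata 1 (trust_metadata). */
      if (!strcmp(key, "datastream")) {
          if (!flv->trust_metadata)       /* default 0 */
              return 0;                   /* wait for real packets instead */
          AVStream *st = create_stream(s, AVMEDIA_TYPE_SUBTITLE);
          if (!st)
              return AVERROR(ENOMEM);
      }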

    If there was such an extra, empty subtitle stream, running
    avformat_find_stream_info still used to terminate within a reasonable
    time before 3749eede66c3774799766b1f246afae8a6ffc9bb. After that
    commit, it would no longer terminate until it reached the max
    analyze duration, which is 90 seconds for flv streams (see
    e6a084641aada7a2e4672172f2ee26642800a361,
    24fdf7334d2bb9aab0abdbc878b8ae51eb57c86b and
    f58e011a1f30332ba824c155078ca701e29aef63).

    Before that commit (which removed the deprecated AVStream.codec), the
    "st->codecpar->codec_id = AV_CODEC_ID_TEXT", set within the demuxer,
    would get propagated into st->codec->codec_id by numerous
    avcodec_parameters_to_context(st->codec, st->codecpar) calls, then further
    into st->internal->avctx->codec_id by update_stream_avctx within
    read_frame_internal in libavformat/utils.c (demux.c these days).
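    For context, the propagation call named above has this shape (a generic
    illustration of the API, not the removed code itself):

      /* Copy demuxer-provided codec parameters into a codec context;
       * afterwards avctx->codec_id mirrors st->codecpar->codec_id
       * (AV_CODEC_ID_TEXT in the case discussed above). */
      AVCodecContext *avctx = avcodec_alloc_context3(NULL);
      if (avcodec_parameters_to_context(avctx, st->codecpar) < 0) {
          /* handle error */
      }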

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DH] libavformat/flvdec.c

  • Stream OpenGL framebuffer over HTTP (via FFmpeg)

    17 June 2016, by mOfl

    I have an OpenGL application whose rendered images need to be streamed over the internet to mobile clients. Previously, it sufficed to simply record the rendering into a video file, which already works; now this should be extended to streaming.

    What is working right now:

    • Render a scene to an OpenGL framebuffer object
    • Capture the FBO content using NvIFR
    • Encode it to H.264 using NvENC (no CPU round trip required)
    • Download the encoded frame to host memory as a byte array
    • Append this frame to a video file

    None of these steps involves FFmpeg or any other library so far. I now want to replace the last step with "stream the current frame's byte array over the internet", and I assume that using FFmpeg and FFserver would be a reasonable choice for this. Am I correct? If not, what would be the proper way?

    If so, how do I approach this within my C++ code? As pointed out, the frame is already encoded. Also, there is no sound or anything else, simply an H.264-encoded frame as a byte array that is updated irregularly and should be converted into a steady video stream. I assume that this would be FFmpeg's job and that the subsequent streaming via FFserver would be simple from there. What I don't know is how to feed my data to FFmpeg in the first place, as all the FFmpeg tutorials I found (in a non-exhaustive search) use a file or a webcam/capture device as the data source, not volatile data in main memory.
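
    One plausible way to do this (a sketch under assumptions, not a verified
    recipe): since the frames are already H.264, no re-encoding is needed;
    open an output muxer directly on a network URL and hand each byte array
    to it as an AVPacket. The "udp://" address and the helper names below are
    illustrative choices, and note that FFserver has since been removed from
    FFmpeg, so a plain network muxer (or an external server) is the more
    durable route:

      #include <libavformat/avformat.h>

      static AVFormatContext *oc;
      static AVStream *st;

      /* Hypothetical setup: mux Annex-B H.264 into MPEG-TS over UDP. */
      int stream_open(const char *url /* e.g. "udp://192.0.2.1:1234" */)
      {
          avformat_network_init();              /* needed for udp:// etc. */
          avformat_alloc_output_context2(&oc, NULL, "mpegts", url);
          st = avformat_new_stream(oc, NULL);
          st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
          st->codecpar->codec_id   = AV_CODEC_ID_H264;
          st->codecpar->width      = 1920;      /* match the FBO size */
          st->codecpar->height     = 1080;
          if (!(oc->oformat->flags & AVFMT_NOFILE))
              avio_open(&oc->pb, url, AVIO_FLAG_WRITE);
          return avformat_write_header(oc, NULL);
      }

      /* Hypothetical per-frame call: wrap the NvENC output in a packet. */
      int stream_send(const uint8_t *data, int size, int64_t pts)
      {
          AVPacket *pkt = av_packet_alloc();
          pkt->data = (uint8_t *)data;
          pkt->size = size;
          pkt->pts  = pkt->dts = pts;           /* in st->time_base units */
          pkt->stream_index = st->index;
          /* copies the data internally, since the packet isn't refcounted */
          int ret = av_interleaved_write_frame(oc, pkt);
          av_packet_free(&pkt);
          return ret;
      }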

    The file mentioned above, which I am already able to create, is a C++ file stream to which I append each single frame, meaning that differing frame rates of the video and the rendering are not handled correctly. This also needs to be taken care of at some point.
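
    For that frame-rate mismatch, the usual fix is to timestamp each packet
    from the capture clock instead of assuming a fixed rate; a short sketch
    (pts_from_capture_us is a hypothetical helper name):

      #include <libavformat/avformat.h>
      #include <libavutil/mathematics.h>

      /* Map a wall-clock capture time in microseconds onto the stream's
       * time base, so irregular render times still yield valid pts. */
      int64_t pts_from_capture_us(int64_t capture_us, const AVStream *st)
      {
          return av_rescale_q(capture_us, AV_TIME_BASE_Q, st->time_base);
      }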

    Can somebody point me in the right direction? Can I forward data from my application to FFmpeg to build a proper video feed without writing to the hard disk? Tutorials are greatly appreciated. By the way, FFmpeg/FFserver is not mandatory. If you have a better idea for streaming OpenGL framebuffer contents, I'm eager to know.
