
Other articles (67)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-select fields; see the two images that follow for a comparison.
    To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)
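
    On the public site this amounts, in essence, to a jQuery call like the following (a minimal sketch, assuming the default setup; the selector is whatever you entered in the plugin configuration):

    // Minimal sketch: what enabling Chosen on the public site boils down to.
    // The selector (select[multiple] here) comes from the plugin configuration.
    jQuery(function ($) {
      $('select[multiple]').chosen();
    });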

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators configure those menus in fine detail.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with Zpip-based templates; (...)

  • The plugin: Mutualisation management

    2 March 2010

    The mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its goal is to provide a pure-SPIP solution to replace this older solution.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customize the central mes_options.php file as you wish. As an example, here is the one from the mediaspip.net platform:
    <?php (...)

On other sites (9953)

  • How Can I Configure Storybook to Use React-App-Rewired?

    8 August 2022, by joseph

    I'm working on a project that uses react-app-rewired to set headers on the dev server's responses, in order to get past ReferenceError: SharedArrayBuffer is not defined (which I hit when using the @ffmpeg/ffmpeg library).


    // config-overrides.js
    const {
      override,
      // disableEsLint,
      // addBabelPlugins,
      // overrideDevServer
    } = require('customize-cra')

    module.exports = {
      devServer(configFunction) {
        // eslint-disable-next-line func-names
        return function (proxy, allowedHost) {
          const config = configFunction(proxy, allowedHost)

          // Set loose allow origin header to prevent CORS issues
          config.headers = {
            'Access-Control-Allow-Origin': '*',
            'Cross-Origin-Opener-Policy': 'same-origin',
            'Cross-Origin-Embedder-Policy': 'require-corp',
            'Cross-Origin-Resource-Policy': 'cross-origin'
          }

          return config
        }
      }
    }


    // package.json
    "scripts": {
      "start": "react-app-rewired start",
      "build": "react-app-rewired build",
      "test": "react-app-rewired test --transformIgnorePatterns \"node_modules/(?!siriwave)/\"",
      "eject": "react-scripts eject",
      "storybook": "start-storybook -p 6006 -s public",
      "build-storybook": "build-storybook -s public"
    }


    Though this works when I run npm start, meaning the headers are set on the dev server's responses, it doesn't work when I run npm run storybook, and I still get the SharedArrayBuffer is not defined error. I'm assuming that's because npm run storybook still uses react-scripts rather than react-app-rewired under the hood, but I'm not sure where to change the configuration for this. Any ideas?
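
    A possible workaround, sketched here as an assumption rather than a verified fix: start-storybook runs its own Express server and never reads config-overrides.js, but Storybook 6.x picks up a .storybook/middleware.js file as Express middleware, so the same headers can be set there:

    // .storybook/middleware.js (sketch, assuming Storybook 6.x middleware support)
    module.exports = function expressMiddleware(router) {
      router.use((req, res, next) => {
        // Same COOP/COEP headers as in config-overrides.js, so that
        // SharedArrayBuffer is also defined in the Storybook dev server.
        res.setHeader('Cross-Origin-Opener-Policy', 'same-origin')
        res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp')
        next()
      })
    }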


  • flvdec: Honor the "flv_metadata" option for the "datastream" metadata field

    9 February 2024, by Martin Storsjö

    By default the option "flv_metadata" (internally using the field
    name "trust_metadata") is set to 0, meaning that we don't allocate
    streams based on information in the metadata, only based on
    actual streams we encounter. However, the "datastream" metadata field
    still would allocate a subtitle stream.

    When muxing, the "datastream" field is added if either a data stream
    or subtitle stream is present - but the same metadata field is used
    to preemptively create a subtitle stream only. Thus, if the field
    was added due to a data stream, not a subtitle stream, the demuxer
    would create a stream which won't get any actual packets.

    If there was such an extra, empty subtitle stream, running
    avformat_find_stream_info still used to terminate within a reasonable
    time before 3749eede66c3774799766b1f246afae8a6ffc9bb. After that
    commit, it no longer would terminate until it reaches the max
    analyze duration, which is 90 seconds for flv streams (see
    e6a084641aada7a2e4672172f2ee26642800a361,
    24fdf7334d2bb9aab0abdbc878b8ae51eb57c86b and
    f58e011a1f30332ba824c155078ca701e29aef63).

    Before that commit (which removed the deprecated AVStream.codec), the
    "st->codecpar->codec_id = AV_CODEC_ID_TEXT", set within the demuxer,
    would get propagated into st->codec->codec_id by numerous
    avcodec_parameters_to_context(st->codec, st->codecpar), then further
    into st->internal->avctx->codec_id by update_stream_avctx within
    read_frame_internal in libavformat/utils.c (demux.c these days).

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DH] libavformat/flvdec.c
  • Stream OpenGL framebuffer over HTTP (via FFmpeg)

    17 June 2016, by mOfl

    I have an OpenGL application whose rendered images need to be streamed over the internet to mobile clients. Previously it was enough to simply record the rendering into a video file, which already works; now this should be extended to streaming as well.

    What is working right now:

    • Render a scene to an OpenGL framebuffer object
    • Capture the FBO content using NvIFR
    • Encode it to H.264 using NvENC (no CPU round trip required)
    • Download the encoded frame to host memory as a byte array
    • Append this frame to a video file

    None of these steps involves FFmpeg or any other library so far. I now want to replace the last step with "stream the current frame's byte array over the internet", and I assume that using FFmpeg and FFserver would be a reasonable choice for this. Am I correct? If not, what would be the proper way?

    If so, how do I approach this from my C++ code? As pointed out, the frame is already encoded. Also, there is no sound or anything else, simply an H.264-encoded frame as a byte array that is updated at irregular intervals and should be converted into a steady video stream. I assume this would be FFmpeg's job, and that the subsequent streaming via FFserver would be simple from there. What I don't know is how to feed my data to FFmpeg in the first place, as all the FFmpeg tutorials I found (in a non-exhaustive search) use a file or a webcam/capture device as the data source, not volatile data in main memory.

    The file mentioned above, which I am already able to create, is a C++ file stream to which I append each single frame, meaning that the differing frame rates of the video and the rendering are not handled correctly. This also needs to be taken care of at some point.

    Can somebody point me in the right direction? Can I forward data from my application to FFmpeg to build a proper video feed without writing to the hard disk? Tutorials are greatly appreciated. By the way, FFmpeg/FFserver is not mandatory. If you have a better idea for streaming OpenGL framebuffer contents, I'm eager to hear it.
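
    One direction worth exploring, sketched under stated assumptions: since the frames are already H.264-encoded, the ffmpeg CLI can read raw Annex-B NAL units from a pipe (-f h264 -i pipe:0) and repacketize them into a network stream without touching the disk. The sketch below uses Node.js purely to keep the plumbing short; from C++ the same is done by writing each frame to the child process's stdin. The frame rate, the onEncodedFrame hook, and the UDP target are illustrative assumptions.

    // Sketch: pipe already-encoded H.264 frames from memory into ffmpeg
    // and re-emit them as an MPEG-TS stream over UDP (no disk I/O).
    const { spawn } = require('child_process')

    const ffmpeg = spawn('ffmpeg', [
      '-f', 'h264',         // raw Annex-B H.264 demuxer, reading from the pipe
      '-framerate', '60',   // assumed render rate; raw frames carry no timestamps
      '-i', 'pipe:0',       // stdin is the data source
      '-c:v', 'copy',       // frames are already NvENC-encoded, so no re-encode
      '-f', 'mpegts', 'udp://127.0.0.1:1234' // example target address
    ], { stdio: ['pipe', 'inherit', 'inherit'] })

    // Hypothetical hook: call whenever the renderer produces an encoded frame.
    function onEncodedFrame(frameBytes) {
      ffmpeg.stdin.write(frameBytes)
    }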