
Other articles (47)
-
Other interesting software
13 April 2011
We don’t claim to be the only ones doing what we do, and we certainly don’t claim to be the best either. What we do, we simply try to do well, and to keep getting better.
The following list presents software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to emulate.
We don’t know them and we haven’t tried them, but you can take a peek.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information about the file’s audio and video streams; and the generation of a thumbnail: extraction of a (...)
On other sites (6046)
-
Stream OpenGL framebuffer over HTTP (via FFmpeg)
17 June 2016, by mOfl
I have an OpenGL application whose rendered images need to be streamed over the internet to mobile clients. Previously, it sufficed to simply record the rendering into a video file, which already works; now this should be extended to live streaming.
What is working right now:
- Render a scene to an OpenGL framebuffer object
- Capture the FBO content using NvIFR
- Encode it to H.264 using NvENC (no CPU round trip required)
- Download the encoded frame to host memory as a byte array
- Append this frame to a video file
None of these steps involves FFmpeg or any other library so far. I now want to replace the last step with "stream the current frame’s byte array over the internet", and I assume that using FFmpeg and FFserver would be a reasonable choice for this. Am I correct? If not, what would be the proper way?
If so, how do I approach this within my C++ code? As pointed out, the frame is already encoded. Also, there is no sound or other data, simply an H.264-encoded frame as a byte array that is updated irregularly and should be converted into a steady video stream. I assume that this would be FFmpeg’s job and that the subsequent streaming via FFserver would be simple from there. What I don’t know is how to feed my data to FFmpeg in the first place, as all the FFmpeg tutorials I found (in a non-exhaustive search) work on a file or a webcam/capture device as the data source, not on volatile data in main memory.
The file mentioned above, which I am already able to create, is a C++ file stream to which I append each individual frame, meaning that differing video and rendering frame rates are not handled correctly. This also needs to be taken care of at some point.
Can somebody point me in the right direction? Can I forward data from my application to FFmpeg to build a proper video feed without writing to the hard disk? Tutorials are greatly appreciated. By the way, FFmpeg/FFserver is not mandatory. If you have a better idea for streaming OpenGL framebuffer contents, I’m eager to know.
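One common way to feed in-memory encoded frames to FFmpeg without touching the disk is to launch the ffmpeg command-line tool as a child process and write the byte stream to its standard input; ffmpeg can then packetize it and push it to a streaming server (or an FFserver feed). The sketch below only illustrates that idea and is not a tested answer to the question: the class and method names are hypothetical, the RTMP URL is a placeholder, and it assumes a POSIX system and that the NvENC output is an Annex B H.264 elementary stream produced at roughly the declared frame rate.

// Minimal sketch (not from the question): pipe already-encoded H.264 frames
// into an ffmpeg child process instead of appending them to a file on disk.
// Assumptions: POSIX popen() is available; the NvENC output is an Annex B
// H.264 elementary stream; the RTMP URL and the 30 fps input rate are
// placeholders to be replaced with the real target and rate.
#include <cstdio>
#include <cstdlib>
#include <vector>

class FfmpegStream {
public:
    FfmpegStream() {
        // "-f h264" tells ffmpeg that stdin (pipe:0) carries a raw H.264
        // elementary stream; "-c copy" forwards it without re-encoding.
        pipe_ = popen(
            "ffmpeg -f h264 -framerate 30 -i pipe:0 "
            "-c copy -f flv rtmp://example.com/live/stream",
            "w");
        if (!pipe_) {
            std::perror("popen(ffmpeg)");
            std::exit(EXIT_FAILURE);
        }
    }

    ~FfmpegStream() {
        if (pipe_)
            pclose(pipe_);  // closes ffmpeg's stdin so it can flush and exit
    }

    // Call once per encoded frame downloaded from NvENC (hypothetical hook).
    void pushFrame(const std::vector<unsigned char>& frame) {
        std::fwrite(frame.data(), 1, frame.size(), pipe_);
        std::fflush(pipe_);  // keep latency low for a live stream
    }

private:
    std::FILE* pipe_ = nullptr;
};

A fixed -framerate obviously does not address the rendering-rate mismatch mentioned above; timestamping each frame properly, for example by muxing through the libavformat API rather than the CLI, would be the more robust route.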
-
flvdec: Honor the "flv_metadata" option for the "datastream" metadata field
9 February 2024, by Martin Storsjö
By default the option "flv_metadata" (internally using the field
name "trust_metadata") is set to 0, meaning that we don't allocate
streams based on information in the metadata, only based on
actual streams we encounter. However the "datastream" metadata field
still would allocate a subtitle stream.

When muxing, the "datastream" field is added if either a data stream
or subtitle stream is present - but the same metadata field is used
to preemptively create a subtitle stream only. Thus, if the field
was added due to a data stream, not a subtitle stream, the demuxer
would create a stream which won't get any actual packets.

If there was such an extra, empty subtitle stream, running
avformat_find_stream_info still used to terminate within reasonable
time before 3749eede66c3774799766b1f246afae8a6ffc9bb. After that
commit, it no longer would terminate until it reaches the max
analyze duration, which is 90 seconds for flv streams (see
e6a084641aada7a2e4672172f2ee26642800a361,
24fdf7334d2bb9aab0abdbc878b8ae51eb57c86b and
f58e011a1f30332ba824c155078ca701e29aef63).

Before that commit (which removed the deprecated AVStream.codec), the
"st->codecpar->codec_id = AV_CODEC_ID_TEXT", set within the demuxer,
would get propagated into st->codec->codec_id by numerous
avcodec_parameters_to_context(st->codec, st->codecpar), then further
into st->internal->avctx->codec_id by update_stream_avctx within
read_frame_internal in libavformat/utils.c (demux.c these days).

Signed-off-by: Martin Storsjö <martin@martin.st>
-
How Can I Configure Storybook to Use React-App-Rewired?
8 August 2022, by joseph
I'm working on a project that implements react-app-rewired to send headers to the server in order to bypass "ReferenceError: SharedArrayBuffer is not defined" (I'm getting this error from using the @ffmpeg/ffmpeg library).

// config-overrides.js
const {
  override,
  // disableEsLint,
  // addBabelPlugins,
  // overrideDevServer
} = require('customize-cra')

module.exports = {
  devServer(configFunction) {
    // eslint-disable-next-line func-names
    return function (proxy, allowedHost) {
      const config = configFunction(proxy, allowedHost)

      // Set loose allow origin header to prevent CORS issues
      config.headers = {
        'Access-Control-Allow-Origin': '*',
        'Cross-Origin-Opener-Policy': 'same-origin',
        'Cross-Origin-Embedder-Policy': 'require-corp',
        'Cross-Origin-Resource-Policy': 'cross-origin'
      }

      return config
    }
  }
}



// package.json
"scripts": {
  "start": "react-app-rewired start",
  "build": "react-app-rewired build",
  "test": "react-app-rewired test --transformIgnorePatterns \"node_modules/(?!siriwave)/\"",
  "eject": "react-scripts eject",
  "storybook": "start-storybook -p 6006 -s public",
  "build-storybook": "build-storybook -s public"
}



Though this works when I run npm start, meaning the headers get sent to the server, it doesn't work when I run npm run storybook, and I still get the "SharedArrayBuffer is not defined" error. I'm assuming it's because npm run storybook still uses react-scripts as opposed to react-app-rewired under the hood, but I'm not sure where I can change the configuration for this. Any ideas?