
Other articles (112)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Improving the base version
13 September 2013
A nicer multiple-selection widget
The Chosen plugin improves the ergonomics of multiple-selection fields. Compare the two following images.
To use it, activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)
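As a sketch, the select[multiple] setting above corresponds to a Chosen initialisation along these lines (a hypothetical configuration fragment; it assumes jQuery and the Chosen library are already loaded on the public site):

```javascript
// Hypothetical illustration of what the "select[multiple]" setting does:
// Chosen replaces every matching multiple-selection list with its
// enhanced widget. Assumes jQuery and chosen.jquery.js are loaded.
jQuery(function ($) {
  $("select[multiple]").chosen({
    width: "100%",                        // stretch the widget to its container
    placeholder_text_multiple: "Choose…"  // shown while nothing is selected
  });
});
```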
On other sites (11486)
-
Extract frames as images from an RTMP stream in real-time
7 November 2014, by SoftForge
I am streaming short videos (4 or 5 seconds) encoded in H264 at 15 fps in VGA quality from different clients to a server using RTMP, which produces an FLV file. I need to analyse the frames from the video as images as soon as possible, so I need the frames to be written as PNG images as they are received.
Currently I use Wowza to receive the streams, and I have tried using the transcoder API to access the individual frames and write them to PNGs. This partially works, but there is about a one-second delay before the transcoder starts processing, and when the stream ends Wowza flushes its buffers, so the last second is not transcoded and I can lose the last 25% of the video frames. I have tried to find a workaround, but Wowza say that it is not possible to prevent the buffer from being flushed. It is also not an ideal solution, because of the one-second delay before I start getting frames, and because the transcoder re-encodes the video, which is computationally expensive and unnecessary for my needs.
I have also tried piping a video in real-time to FFmpeg and getting it to produce the PNG images but unfortunately it waits until it receives the entire video before producing the PNG frames.
How can I extract all of the frames from the stream as close to real-time as possible? I don’t mind what language or technology is used as long as it can run on a Linux server. I would be happy to use FFmpeg if I can find a way to get it to write the images while it is still receiving the video, or even Wowza if I can find a way not to lose frames and not to re-encode.
Thanks for any help or suggestions.
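One approach worth trying, sketched here as a suggestion rather than a tested pipeline (the RTMP URL and the output filename pattern are placeholders): ffmpeg can read an RTMP input directly and emit one PNG per decoded frame as packets arrive, and its input-analysis flags can shrink the start-up buffering that otherwise delays the first images.

```shell
# Hypothetical sketch: pull an RTMP stream and write one PNG per decoded
# frame as it arrives. The URL and filename pattern are placeholders.
# -fflags nobuffer, -probesize 32 and -analyzeduration 0 reduce the
# start-up probing and buffering that delay the first frames.
ffmpeg -fflags nobuffer -probesize 32 -analyzeduration 0 \
    -i rtmp://example.com/live/stream \
    -f image2 frame_%04d.png
```

Whether this avoids the trailing-frame loss you see with Wowza would need to be verified against your actual stream.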
-
AVC-Intra support
20 July 2013, by Kieran Kunhya
AVC-Intra support
This format has been reverse engineered, and x264’s output has almost exactly the same bitstream as Panasonic cameras and encoders produce. It therefore does not comply with SMPTE RP2027, since Panasonic themselves do not comply with their own specification. It has been tested in Avid, Premiere, Edius and Quantel.
Parts of this patch were written by Jason Garrett-Glaser, and some reverse engineering was done by Joseph Artsimovich. -
Quick Sync Video transcoding via the FFmpeg API
8 October 2020, by Paul_ghost
I want to use Quick Sync Video technology for transcoding H264 to MJPEG, so I tried the FFmpeg API in C and ran into some problems. To understand hardware transcoding, I studied "qsvdec.c" and "vaapi_transcode.c" from https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples. But when I tried to add encoding to "qsvdec.c" (in the function "decode_packet", where the comment says to put your code):


encoder_ctx->hw_frames_ctx = av_buffer_ref(decoder_ctx->hw_frames_ctx);
if (!encoder_ctx->hw_frames_ctx) {
    ret = AVERROR(ENOMEM);
    goto fail;
}
encoder_ctx->time_base = input_ctx->time_base;
encoder_ctx->pix_fmt   = AV_PIX_FMT_QSV;
encoder_ctx->width     = decoder_ctx->width;
encoder_ctx->height    = decoder_ctx->height;

if ((ret = avcodec_open2(encoder_ctx, enc_codec, NULL)) < 0) { /* enc_codec is mjpeg */
    fprintf(stderr, "Failed to open encode codec. Error code: %s\n",
            av_err2str(ret));
    goto fail;
}




I got the error "Failed to open encoder codec: error code Function not implemented". What is the reason for this? Where can I find a tutorial for hardware acceleration in the FFmpeg API?
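A likely cause, offered as an assumption rather than a confirmed fix: the plain "mjpeg" software encoder cannot consume AV_PIX_FMT_QSV hardware frames, so avcodec_open2() fails; FFmpeg ships a separate QSV-capable JPEG encoder named "mjpeg_qsv". A minimal sketch of selecting it, written as a fragment for the same spot in qsvdec.c rather than a standalone program (variable names follow the snippet above):

```c
/* Sketch, not a verified fix: select the QSV JPEG encoder instead of the
 * software "mjpeg" one, so it can accept AV_PIX_FMT_QSV frames directly.
 * Requires an FFmpeg build configured with Quick Sync support. */
const AVCodec *enc_codec = avcodec_find_encoder_by_name("mjpeg_qsv");
if (!enc_codec) {
    fprintf(stderr, "mjpeg_qsv encoder not available in this build\n");
    goto fail;
}
/* The rest stays as in the snippet above: share hw_frames_ctx, set
 * pix_fmt to AV_PIX_FMT_QSV, then avcodec_open2(encoder_ctx, enc_codec, NULL). */
```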