
Other articles (99)
-
Changing your graphic theme
22 February 2011
The graphic theme does not touch the actual layout of the elements on the page. It only modifies the appearance of the elements.
The placement can indeed be changed, but that change is purely visual and does not affect the semantic representation of the page.
Modifying the graphic theme in use
To modify the graphic theme in use, the zen-garden plugin must be enabled on the site.
You then simply go to the configuration area of the (...)
-
Sounds
15 May 2013
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
On other sites (16102)
-
ffmpeg: playing media files does not release processor after media ends?
2 September 2017, by Blake Senftner
I have a commercial C++ application that uses FFMPEG's libav series of DLLs to play media in a Windows application. I basically started with the dranger tutorial (http://dranger.com/ffmpeg/) about two years ago and built a library that can play back USB cameras, IP camera/online streams, and media files on disk.
My question is directed at anyone who has created their own similar library:
I recently noticed that after playing a video file from disk (as opposed to a live stream from a USB or IP source), my 8-core i7 workstation shows 28-29% CPU usage after the media file has ended. My application can play an unlimited number of videos, and each "virtual video panel" (not a window, just a "virtual tab" created with wxWidgets that holds an OpenGL context I use to glDrawPixels() into the visible app panel) plays any of the three media types fine (USB, IP stream, or media file). When I stop a USB or IP stream, my application's CPU usage drops to zero. But when I "stop" a media file, or the media file ends on its own, the CPU usage does not drop until the application quits.
Three media files playing take my application to 80-83% CPU, and it never drops, unless I reuse that same "virtual video panel" to play a USB or IP stream. If I stop those streams, the CPU usage is released.
MP4 (h264) video files exhibit this "holding a processor" problem.
MP4 (mpeg2) files do not.
MP4 (h265) files do not.
MPG (mpeg1) files do not.
ASF (MS MPEG-4 Video v3) files do not.
MKV (vp8) files do not.
Neither MOV (h265) nor MOV (h264) files do.
Neither FLV (Sorensen) nor FLV (h264) files do.
So it is not just the h264 codec.
Does anyone know what is going on, and how to tell libav to release the CPU when a media file is no longer playing?
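For reference, here is a minimal sketch of the end-of-file cleanup a dranger-style player generally needs, written against the modern send/receive API; the function name and structure are illustrative assumptions, not code from the application described above. If any of these steps is skipped, or a read/decode loop keeps spinning after EOF, a core can stay busy in exactly this way.
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
/* Hypothetical teardown for a file-playback context at end of stream. */
static void player_teardown(AVFormatContext **fmt_ctx, AVCodecContext **dec_ctx)
{
    /* Put the decoder in draining mode, then discard buffered frames. */
    avcodec_send_packet(*dec_ctx, NULL);
    AVFrame *frame = av_frame_alloc();
    while (avcodec_receive_frame(*dec_ctx, frame) == 0)
        av_frame_unref(frame);
    av_frame_free(&frame);
    /* Free decoder and demuxer state. */
    avcodec_free_context(dec_ctx);   /* also closes the open codec */
    avformat_close_input(fmt_ctx);   /* closes the file or network stream */
}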
-
Using ffmpeg libavcodec to decode a video file then encode to H264: media file's duration is zero
1 September 2017, by Qing
Need help: I am using ffmpeg libavcodec to decode a video file, then encode it to H264 and write it to an mp4 media container, but in the end the media file's duration is zero. The following is my code's workflow:
AVFormatContext* input_format_context = NULL;
AVFormatContext* output_format_context = NULL;
AVIOContext* output_io_context = NULL;
AVCodecContext* input_codec_context = NULL;
AVCodecContext* output_codec_context = NULL;
AVCodec* codec = NULL;
AVStream* input_stream = NULL;
AVStream* output_stream = NULL;
AVFrame* frame = NULL;
int convert_init(const char* input_filename, const char* output_filename)
{
/** Open the input file to read from it. */
avformat_open_input(&input_format_context,
input_filename, NULL, NULL);
/** Get information on the input file (number of streams etc.). */
avformat_find_stream_info(input_format_context, NULL);
/** Open the output file to write to it. */
avio_open(&output_io_context, output_filename,
AVIO_FLAG_WRITE);
/** Create a new format context for the output container format. */
output_format_context = avformat_alloc_context();
/** Associate the output file (pointer) with the container format context. */
output_format_context->pb = output_io_context;
/** Guess the desired container format based on the file extension. */
output_format_context->oformat = av_guess_format(NULL,
output_filename, NULL);
av_strlcpy((output_format_context)->filename, output_filename,
sizeof(output_format_context->filename));
/** stream0 is the video stream */
AVStream* input_stream = input_format_context->streams[0];
/**
* Init the input_codec_context
*/
/** Find a decoder for the video stream. */
codec = avcodec_find_decoder(input_stream->codecpar->codec_id);
/** Allocate a new decode context */
input_codec_context = avcodec_alloc_context3(codec);
/** Initialize the stream parameters with demuxer information */
avcodec_parameters_to_context(input_codec_context,
input_stream->codecpar);
/** Open the decoder for the stream. */
avcodec_open2(input_codec_context, codec, NULL);
/**
* Create an output stream for writing encoded data
*
* AM I MISSING SOMETHING?
*
*/
output_stream = avformat_new_stream(output_format_context, NULL);
/**
* Init the output_codec_context
*/
/** Find an encoder for the output video stream, using H264. */
codec = avcodec_find_encoder(AV_CODEC_ID_H264);
/** Allocate an encode context. */
output_codec_context = avcodec_alloc_context3(codec);
/**
* Setup encode context parameters.
*
* AM I MISSING SOMETHING?
*
*/
output_codec_context->bit_rate = input_codec_context->bit_rate;
output_codec_context->width = input_codec_context->width;
output_codec_context->height = input_codec_context->height;
output_codec_context->time_base = (AVRational){1, 25};
output_codec_context->framerate = (AVRational){25, 1};
output_codec_context->gop_size = 10;
output_codec_context->max_b_frames = 1;
output_codec_context->pix_fmt = AV_PIX_FMT_YUV420P;
output_codec_context->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
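/**
* NOTE: the encoder appears to be configured but never opened.
* A call such as avcodec_open2(output_codec_context, codec, NULL)
* is presumably needed here, before copying the parameters to the
* output stream below.
*/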
/** Setup output_stream codecpar. */
avcodec_parameters_from_context(output_stream->codecpar,
output_codec_context);
/** Alloc an av frame */
frame = av_frame_alloc();
}
void convert_it()
{
AVPacket input_packet;
AVPacket output_packet;
/** Write the media file container header */
avformat_write_header(output_format_context, NULL);
/**
* decode frames and encode to H264
* */
while (1) {
av_init_packet(&input_packet);
input_packet.data = NULL;
input_packet.size = 0;
av_init_packet(&output_packet);
output_packet.data = NULL;
output_packet.size = 0;
/** Read a frame to decode */
av_read_frame(input_format_context, &input_packet);
if (av_read_frame is end of file) {
break;
}
...
...
/** Decoding... */
avcodec_send_packet(input_codec_context, &input_packet);
...
...
/** Get a decoded frame */
avcodec_receive_frame(input_codec_context, frame);
...
...
/** Make the frame writable; is it necessary? */
av_frame_make_writable(frame);
/** Encode to H264 */
avcodec_send_frame(output_codec_context, frame);
...
...
/** Get an encoded packet */
avcodec_receive_packet(output_codec_context, &output_packet);
/**
* Write the packet to output.
* Here is the point! Should I configure the parameters
* in the packet, such as 'pts', 'dts', 'duration', etc.? If so,
* how? Or do I just directly write the packet to output?
*/
av_interleaved_write_frame(output_format_context, &output_packet);
}
/** Write the media file container trailer */
av_write_trailer(output_format_context);
}
int main() {
convert_init("./sample.avi", "./output.mp4");
convert_it();
}
Using VLC or QuickTime to play back the output.mp4 file fails: the file duration is zero. When dragging the time progress bar I can see the picture frames clearly, so it seems the encoded packet buffer data is correct but the timestamps are wrong. Am I missing something when configuring the output_stream? The following is the message from ffprobe.
ffprobe output.mp4
ffprobe version 3.3.3 Copyright (c) 2007-2017 the FFmpeg developers
built with Apple LLVM version 8.1.0 (clang-802.0.42)
configuration: --enable-shared --enable-libmp3lame
libavutil 55. 58.100 / 55. 58.100
libavcodec 57. 89.100 / 57. 89.100
libavformat 57. 71.100 / 57. 71.100
libavdevice 57. 6.100 / 57. 6.100
libavfilter 6. 82.100 / 6. 82.100
libswscale 4. 6.100 / 4. 6.100
libswresample 2. 7.100 / 2. 7.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'output.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.71.100
Duration: 00:00:00.06, start: 0.000000, bitrate: 6902181 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 720x408 [SAR 1:1 DAR 30:17], 13929056 kb/s, 90k fps, 90k tbr, 90k tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
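A minimal sketch of the timestamp handling this question is circling around, assuming the {1, 25} encoder time base set in convert_init; frame_index here is an illustrative counter, not a variable from the code above:
/* Hypothetical sketch: stamp each frame in encoder time-base units
* before encoding, then rescale packet pts/dts/duration into the
* output stream's time base (set by the muxer during
* avformat_write_header) before muxing. */
frame->pts = frame_index++;
avcodec_send_frame(output_codec_context, frame);
while (avcodec_receive_packet(output_codec_context, &output_packet) == 0) {
    av_packet_rescale_ts(&output_packet,
                         output_codec_context->time_base,
                         output_stream->time_base);
    output_packet.stream_index = output_stream->index;
    av_interleaved_write_frame(output_format_context, &output_packet);
}
Without the rescale, pts values in {1, 25} units get interpreted against mp4's 90k time base, which would be consistent with the 90k fps and near-zero duration ffprobe reports above.
-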
How to manipulate large media files in Node.js in a non-blocking way
26 August 2017, by Jacob Prud'homme
I am currently creating a Node.js app that receives an audio/video stream, writes it progressively to disk, then transcodes it with ffmpeg once the stream has ended and sends it somewhere else to be stored, deleting it locally.
Besides the fact that I can transcode the stream before writing it to streamline the entire thing (this feature is planned), what is the best way to handle these operations on potentially large files?
I am aware of spawning child processes (the method I'm currently using), but I'm not sure how they actually function, even after much reading. I'm not even sure using "spawn" is exactly what I want here (is "fork" a better option?).
Essentially, I want to know how to transcode -> upload -> delete the file without blocking Node.js, so that multiple users can do the same thing simultaneously. Also, I am thinking of putting all 3 operations in a single bash script so that they happen synchronously in sequential order; is this fine?