
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (35)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match the chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
Configurable image and logo sizes
9 February 2011, by
In many places on the site, logos and images are resized to fit the slots defined by the themes. Since these sizes can change from one theme to another, they can be defined directly in the theme, sparing the user from having to configure them manually after changing the site's appearance.
These image sizes are also available in MediaSPIP Core's specific configuration. The maximum size of the site logo in pixels (...) -
Farm management
2 March 2010, by
The farm as a whole is managed by "super admins".
Certain settings can be adjusted to regulate the needs of the different channels.
Initially it uses the "Gestion de mutualisation" plugin
On other sites (5469)
-
ffmpeg API muxing h264 encoded frames to mkv
24 March 2017, by Pawel K
Hi, I'm having some problems muxing H.264-encoded frames into an MKV container using code from ffmpeg-3.2.4.
I have ended up with the following code, which is a mashup of code found on SO and the muxing.c example from ffmpeg:
(And yes, I am aware that it is ugly, with no error checking etc.; it is meant to be like that for clarity :) )
char *filename = "out.mkv";
const uint8_t SPS[] = { 0x67, 0x42, 0x40, 0x1F, 0x96, 0x54, 0x02, 0x80, 0x2D, 0xD0, 0x0F, 0x39, 0xEA };
const uint8_t PPS[] = { 0x68, 0xCE, 0x38, 0x80 };
int fps = 5;
typedef struct OutputStream
{
AVStream *st;
AVCodecContext *enc;
/* pts of the next frame that will be generated */
int64_t next_pts;
int samples_count;
AVFrame *frame;
AVFrame *tmp_frame;
float t, tincr, tincr2;
struct SwsContext *sws_ctx;
struct SwrContext *swr_ctx;
} OutputStream;
static void avlog_cb(void *s, int level, const char *szFmt, va_list varg)
{
vprintf(szFmt, varg);
}
int main(void)
{
AVOutputFormat *fmt;
AVFormatContext *formatCtx;
AVCodec *audio_codec;
AVCodec *video_codec;
OutputStream video_st = { 0 };
OutputStream audio_st = { 0 };
av_register_all();
av_log_set_level(AV_LOG_TRACE);
//av_log_set_callback(avlog_cb);
//allocate output and format ctxs
avformat_alloc_output_context2(&formatCtx, NULL, NULL, filename);
fmt = formatCtx->oformat;
//allocate streams
video_codec = avcodec_find_encoder(fmt->video_codec);
video_st.st = avformat_new_stream(formatCtx, NULL);
video_st.st->id = 0;
AVCodecContext *codecCtx = avcodec_alloc_context3(video_codec);
fmt->video_codec = AV_CODEC_ID_H264;
video_st.enc = codecCtx;
codecCtx->codec_id = fmt->video_codec;
codecCtx->bit_rate = 400000;
codecCtx->width = 1080;
codecCtx->height = 720;
codecCtx->profile = FF_PROFILE_H264_CONSTRAINED_BASELINE;
codecCtx->level = 31;
video_st.st->time_base = (AVRational){ 1, fps };
codecCtx->time_base = video_st.st->time_base;
codecCtx->gop_size = 4;
codecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
//open video codec
codecCtx->extradata_size = 24;
codecCtx->extradata = (uint8_t *)av_malloc(codecCtx->extradata_size);
uint8_t extra_data_array[] = { 0x01, SPS[1], SPS[2], SPS[3], 0xFF, 0xE1, 0xc0, 0, 0x42, 0x40, 0x1F, 0x96, 0x54, 0x02, 0x80, 0x2D, 0xD0, 0x0F, 0x39, 0xEA, 0x03, 0xCE, 0x38, 0x80 };
memcpy(codecCtx->extradata, extra_data_array, codecCtx->extradata_size);
AVCodecContext *c = video_st.enc;
AVDictionary *opt = NULL;
avcodec_open2(c, video_codec, &opt);
avcodec_parameters_from_context(video_st.st->codecpar, c);
//open output file
avio_open(&formatCtx->pb, filename, AVIO_FLAG_WRITE);
//write header
int res = avformat_write_header(formatCtx, NULL);
//write frames
// get the frames from file
uint32_t u32frameCnt = 0;
do
{
char i8frame_name[64] = ""; /* char, not int8_t: sprintf/fopen expect char * */
uint8_t *pu8framePtr = NULL;
AVPacket pkt = { 0 };
av_init_packet(&pkt);
sprintf(i8frame_name, "frames/frame%d.bin", u32frameCnt++);
//reading frames from files
FILE *ph264Frame = fopen(i8frame_name, "rb"); /* binary mode for raw H.264 data */
if(NULL == ph264Frame)
{
goto leave;
}
//get file size
fseek(ph264Frame, 0L, SEEK_END);
uint32_t u32file_size = 0;
u32file_size = ftell(ph264Frame);
fseek(ph264Frame, 0L, SEEK_SET);
pu8framePtr = malloc(u32file_size);
uint32_t u32readout = fread(pu8framePtr, 1, u32file_size, ph264Frame);
//if the read frame is a key frame i.e. nalu hdr type = 5 set it as a key frame
if(0x65 == pu8framePtr[4])
{
pkt.flags = AV_PKT_FLAG_KEY;
}
pkt.data = (uint8_t *)pu8framePtr;
pkt.size = u32readout;
pkt.pts = u32frameCnt;
pkt.dts = pkt.pts;
av_packet_rescale_ts(&pkt, c->time_base, video_st.st->time_base);
pkt.stream_index = video_st.st->index;
av_interleaved_write_frame(formatCtx, &pkt);
free(pu8framePtr);
fclose(ph264Frame);
}
while(1);
leave:
av_write_trailer(formatCtx);
av_dump_format(formatCtx, 0, filename, 1);
avcodec_free_context(&video_st.enc);
avio_closep(&formatCtx->pb);
avformat_free_context(formatCtx);
}
It can be compiled with the following command line (after adding the headers):
gcc file.c -o test_app -I/usr/local/include -L/usr/local/lib -lxcb-shm -lxcb -lX11 -lx264 -lm -lz -pthread -lswresample -lswscale -lavcodec -lavformat -lavdevice -lavutil
The files being read are a valid Annex B stream (valid as in: playable in VLC after concatenating them into a file). It is Constrained Baseline 3.1 profile H.264 and comes from an IP camera's interleaved RTCP/RTP stream (demuxed).
The result is... well, I don't see the picture. I get only a black screen with the progress bar and timer running. I don't know if I'm doing something wrong setting up the codecs and streams, or if it's just wrong timestamps.
I know I got them wrong in some manner, but I don't fully understand yet how to calculate the correct presentation times: both the stream and the codec have a time_base field, and I know that the sample rate of the video is 90 kHz and the frame rate is 5 fps. On top of it all, the examples I've found have to some extent deprecated parts that change the flow/meaning of the application, which doesn't help at all. So if anyone could help, I would appreciate it (and I would guess not only me).
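For what it's worth, the relation between the two rates mentioned above can be sanity-checked with trivial arithmetic (a sketch only; the variable names are illustrative): with a 90 kHz RTP clock at 5 fps, each frame spans a fixed number of clock ticks, and with codecCtx->time_base set to {1, fps} the pts can simply start at 0 and increase by one per frame, letting av_packet_rescale_ts convert to the stream time base.

```shell
# 90 kHz RTP clock at 5 fps: ticks per frame = clock / fps
clock=90000
fps=5
ticks_per_frame=$((clock / fps))
echo "$ticks_per_frame"   # 18000 ticks, i.e. 0.2 s per frame
```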
Regards, Pawel
-
FFmpeg stream extraction modifies subtitles [closed]
21 May 2024, by user18812922
I have a video with the following ffprobe output:


Input #0, matroska,webm, from 'video.mkv':
 Metadata:
 title : Video - 01
 creation_time : 2021-07-14T02:49:59.000000Z
 ENCODER : Lavf58.29.100
 Duration: 00:22:57.28, start: 0.000000, bitrate: 392 kb/s
 Chapters:
 Chapter #0:0: start 0.000000, end 86.169000
 Metadata:
 title : Opening
 Chapter #0:1: start 86.169000, end 641.266000
 Metadata:
 title : Part A
 Chapter #0:2: start 641.266000, end 651.359000
 Metadata:
 title : Eyecatch
 Chapter #0:3: start 651.359000, end 1286.160000
 Metadata:
 title : Part B
 Chapter #0:4: start 1286.160000, end 1356.355000
 Metadata:
 title : Ending
 Chapter #0:5: start 1356.355000, end 1376.876000
 Metadata:
 title : Preview
 Stream #0:0: Video: hevc (Main 10), yuv420p10le(tv, bt709), 854x480 [SAR 1280:1281 DAR 16:9], 23.98 fps, 23.98 tbr, 1k tbn (default)
 Metadata:
 DURATION : 00:22:56.959000000
 Stream #0:1(eng): Audio: vorbis, 48000 Hz, stereo, fltp (default)
 Metadata:
 title : English [FLAC 2.0]
 DURATION : 00:22:57.278000000
 Stream #0:2(jpn): Audio: vorbis, 48000 Hz, stereo, fltp
 Metadata:
 title : Japanese [FLAC 2.0]
 DURATION : 00:22:57.276000000
 Stream #0:3(eng): Subtitle: ass (ssa)
 Metadata:
 title : Signs and Songs [FMA1394/Redc4t]
 DURATION : 00:22:51.090000000
 Stream #0:4(eng): Subtitle: ass (ssa)
 Metadata:
 title : English [FMA1394/Redc4t]
 DURATION : 00:22:51.090000000
 Stream #0:5(eng): Subtitle: hdmv_pgs_subtitle (pgssub), 1920x1080
 Metadata:
 title : Full English Retail
 DURATION : 00:22:51.120000000
 Stream #0:6: Attachment: ttf
 Metadata:
 filename : 8bitoperator.ttf
 mimetype : application/x-truetype-font
 Stream #0:7: Attachment: ttf
 Metadata:
 filename : Cabin-Bold.ttf
 mimetype : application/x-truetype-font
 Stream #0:8: Attachment: ttf
 Metadata:
 filename : calibrib.ttf
 mimetype : application/x-truetype-font
 Stream #0:9: Attachment: ttf
 Metadata:
 filename : daniel_0.ttf
 mimetype : application/x-truetype-font
 Stream #0:10: Attachment: ttf
 Metadata:
 filename : DEATH_FONT.TTF
 mimetype : application/x-truetype-font
 Stream #0:11: Attachment: ttf
 Metadata:
 filename : Dominican.ttf
 mimetype : application/x-truetype-font
 Stream #0:12: Attachment: ttf
 Metadata:
 filename : gishabd.ttf
 mimetype : application/x-truetype-font
 Stream #0:13: Attachment: ttf
 Metadata:
 filename : PATRICK_0.TTF
 mimetype : application/x-truetype-font
 Stream #0:14: Attachment: ttf
 Metadata:
 filename : Qlassik-Medium.ttf
 mimetype : application/x-truetype-font
Unsupported codec with id 98304 for input stream 6
Unsupported codec with id 98304 for input stream 7
Unsupported codec with id 98304 for input stream 8
Unsupported codec with id 98304 for input stream 9
Unsupported codec with id 98304 for input stream 10
Unsupported codec with id 98304 for input stream 11
Unsupported codec with id 98304 for input stream 12
Unsupported codec with id 98304 for input stream 13
Unsupported codec with id 98304 for input stream 14



I am trying to extract the subtitles, edit them and reattach them to the video.
(I need my program to do that so I don't want to use other software)


Command 1


ffmpeg -i video.mkv -map 0:3 -c:s ssa subs.ass
ffmpeg -i video.mkv -i subs.ass -map 0 -map -0:s -map 1 -c copy out.mkv



Command 2


ffmpeg -i video.mkv -map 0:3 subs.ass
ffmpeg -i video.mkv -i subs.ass -map 0 -map -0:s -map 1 -c copy out.mkv



Command 3


ffmpeg -i video.mkv -map 0:3 subs.srt
ffmpeg -i video.mkv -i subs.srt -map 0 -map -0:s -map 1 -c copy out.mkv



Command 4


ffmpeg -i video.mkv -map 0:3 subs.srt
ffmpeg -i subs.srt subs.ass
ffmpeg -i video.mkv -i subs.ass -map 0 -map -0:s -map 1 -c copy out.mkv



Command 5


ffmpeg -i video.mkv -map 0:3 subs.ass
ffmpeg -i subs.ass subs.srt
ffmpeg -i video.mkv -i subs.srt -map 0 -map -0:s -map 1 -c copy out.mkv



The problem


After extraction, the subtitles seem to go by really quickly, meaning they are displayed and disappear almost immediately.


For example, the first subtitle is as follows in SRT:


1
00:00:03,100 --> 00:00:03,560
<font face="Dominican" size="77" color="#f7f7f7">Within the spreading darkness</font>
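Taken alone, that cue is on screen for well under half a second, which matches the "disappears really quickly" impression; checking the arithmetic (times are hh:mm:ss,ms):

```shell
# First cue: 00:00:03,100 --> 00:00:03,560
start_ms=3100
end_ms=3560
duration_ms=$((end_ms - start_ms))
echo "$duration_ms"   # 460 ms on screen
```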



Now, in SRT the size is also wrong, but I assume that's because of the conversion from ASS to SRT.


If I reattach the subtitle file to the video and open it, the subtitles are displayed and disappear way too fast, and they don't match the original subtitles in the video.


(i.e., the original video's subtitles are shown for at least a second)


Expected behaviour


The subtitles should be displayed for the same duration as the original subtitles.


NOTE


It's my first question for ffmpeg related issues so feel free to ask me for anything else you may need.


UPDATE 1


I realized that the subtitle timings were actually OK, since the same line appears in multiple consecutive cues, so the reason they don't play must be something else.


Example of the file


1
00:00:03,100 --> 00:00:03,560
<font face="Dominican" size="77" color="#f7f7f7">Within the spreading darkness</font>

2
00:00:03,560 --> 00:00:04,650
<font face="Dominican" size="77" color="#f7f7f7">Within the spreading darkness</font>

3
00:00:04,650 --> 00:00:05,100
<font face="Dominican" size="77" color="#f7f7f7">Within the spreading darkness</font>



So the problem is that VLC doesn't show more than the first subtitle.


The strange thing is that when I use the command below


ffmpeg -i video.mkv -i subs.srt -map 0 -map -0:s -map 1 -c copy -c:s subrip out.mkv



Then more lines of the subtitle (but not all) play.


It stops at the 17th line.


I believe that's an encoder problem? But I really don't know.


Also, what I noticed is that VLC stops showing the subtitles, but Windows Media Player (the Windows 11 version) displays them correctly even after the 17th line.


BUT, if I add subtitles from another video, they play correctly in both VLC and Windows Media Player.


Update 2
As @Gyan said in his answer, I should use the following command:


ffmpeg -i video.mkv -map 0:3 -c:s copy subs.ass



But then, if I reattach the subs with


ffmpeg -i video.mkv -i subs.ass -map 0 -map -0:s -map 1 -c copy -c:s ass out.mkv



The subtitles show up to the 17th line in both VLC and Windows Media Player.


or


ffmpeg -i video.mkv -i .\subs.ass -map 0 -map -0:s -map 1 -c copy out.mkv



The subtitles do not show up at all (not even in Windows Media Player).


-
nodejs ffmpeg play video at specific time and stream it to client
12 March 2020, by bluejayke
I'm trying to make a basic online video editor with Node.js and ffmpeg.
To do this I need two steps:
-
set the in and out times of the videos from the client, which requires the client to view the video at specific times and change the playback position. Meaning, if a single video is used as input and split into smaller parts, playback needs to resume from the start time of the next edited segment, if that makes sense.
-
send the in/out data to Node.js and export the result with ffmpeg as a finished video.
At first I wanted to do step 1 purely on the client, then upload the source video(s) to Node.js, generate the same result with ffmpeg, and send it back.
But there are many problems with video processing on the client side in HTML at the moment, so now I have a change of plans: do all of the processing on the Node.js server, including the video playback.
This is the part I am stuck at now. I'm aware that ffmpeg can be used in many different ways from Node.js, but I have not found a way to play an .mp4 or webm video in real time with ffmpeg, at a specific timestamp, and send the streaming video (again, at a certain timestamp) to the client.
I've seen ffmpeg's pipe:1 output, but I couldn't find any tutorial that gets it working with an mp4/webm video and parses the stdout data somehow with Node.js to send it to the client. And even if I could get that part to work, I still have no idea how to play the video, in real time, at a certain timestamp.
I've also seen ffplay, but that's only for testing as far as I know; I haven't seen any way of getting the video data from it in real time with Node.js.
So:
how can I play a video in Node.js at a specific time (preferably with ffmpeg), and send it back to the client in real time?
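One possible sketch of the ffmpeg side (the file name, seek offset, and the fragmented-MP4 flags are assumptions, not something tested against this setup): putting -ss before -i makes ffmpeg seek before decoding starts, and -movflags frag_keyframe+empty_moov produces an MP4 that can be written to stdout and piped into an HTTP response. The command is built as a string here so the sketch can be inspected without ffmpeg or an input file being present:

```shell
# Assumed values; in Node.js these arguments would typically go to
# child_process.spawn, with the child's stdout piped to the HTTP
# response.
seek="00:01:23"
cmd="ffmpeg -ss $seek -i input.mp4 -c:v libx264 -movflags frag_keyframe+empty_moov -f mp4 pipe:1"
echo "$cmd"
```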
What I have already seen:
Best approach to real time http streaming to HTML5 video client
Live streaming using FFMPEG to web audio api
Ffmpeg - How to force MJPEG output of whole frames ?
ffmpeg : Render webm from stdin using NodeJS
No data written to stdin or stderr from ffmpeg
node.js live streaming ffmpeg stdout to res
Realtime video conversion using nodejs and ffmpeg
Pipe output of ffmpeg using nodejs stdout
can’t re-stream using FFMPEG to MP4 HTML5 video
FFmpeg live streaming webm video to multiple http clients over Nodejs
http://www.mobiuso.com/blog/2018/04/18/video-processing-with-node-ffmpeg-and-gearman/
stream mp4 video with node fluent-ffmpeg
How to get specific start & end time in ffmpeg by Node JS ?
Live streaming : node-media-server + Dash.js configured for real-time low latency
Low Latency (50ms) Video Streaming with NODE.JS and html5
Server node.js for livestreaming
Stream part of the video to the client
Video streaming with HTML 5 via node.js
How to (pseudo) stream H.264 video - in a cross browser and html5 way ?
How to stream video data to a video element ?
How do I convert an h.264 stream to MP4 using ffmpeg and pipe the result to the client ?
https://medium.com/@brianshaler/on-the-fly-video-rendering-with-node-js-and-ffmpeg-165590314f2
-