
Media (2)
-
Core Media Video
4 April 2013, by
Updated: June 2013
Language: French
Type: Video
-
Video d’abeille en portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (98)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
Use, discuss, criticize
13 April 2011, by
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
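The fallback logic described above, HTML5 tags first and Flowplayer's Flash player for older browsers, can be sketched as follows. This is a hypothetical helper for illustration, not MediaSPIP's actual player code:

```javascript
// Decide which playback backend to use, mimicking the HTML5-first,
// Flash-fallback strategy described above (hypothetical helper).
function choosePlayer(videoElement) {
  // canPlayType is the standard HTML5 media capability probe;
  // its presence indicates <video>/<audio> tag support.
  if (videoElement && typeof videoElement.canPlayType === 'function') {
    return 'html5';
  }
  return 'flowplayer-flash'; // fallback for browsers without HTML5 media
}

console.log(choosePlayer({ canPlayType: () => 'probably' })); // 'html5'
console.log(choosePlayer(null));                              // 'flowplayer-flash'
```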
On other sites (10029)
-
node js ffmpeg hls stream sound is repeating and overlays to each other
1 April 2022, by Marty
I'm trying to stream video and audio over RTMP, and I'm stuck with a problem: the sound overlaps itself and keeps repeating. My goal is for the audio not to repeat or overlap. HLS parsing is done with hls.js.


This is my ffmpeg configuration, which creates four stream playlists that are referenced by the master file index.m3u8:


let argv = ['-y', '-i', inPath];
Array.prototype.push.apply(argv, [
    '-filter_complex', '[0:v]split=4[v1][v2][v3][v4];[v1]copy[v1out];[v2]scale=w=1280:h=720[v2out];[v3]scale=w=800:h=480[v3out];[v4]scale=w=640:h=360[v4out]',
    '-map', '[v1out]', '-c:v:0', 'libx264', '-x264-params', 'nal-hrd=cbr:force-cfr=1', '-b:v:0', '5M', '-maxrate:v:0', '5M', '-bufsize:v:0', '5M', '-preset', 'veryfast', '-g', '48', '-sc_threshold', '0', '-keyint_min', '48',
    '-map', '[v2out]', '-c:v:1', 'libx264', '-x264-params', 'nal-hrd=cbr:force-cfr=1', '-b:v:1', '3M', '-maxrate:v:1', '3M', '-bufsize:v:1', '3M', '-preset', 'veryfast', '-g', '48', '-sc_threshold', '0', '-keyint_min', '48',
    '-map', '[v3out]', '-c:v:2', 'libx264', '-x264-params', 'nal-hrd=cbr:force-cfr=1', '-b:v:2', '1M', '-maxrate:v:2', '1M', '-bufsize:v:2', '1M', '-preset', 'veryfast', '-g', '48', '-sc_threshold', '0', '-keyint_min', '48',
    '-map', '[v4out]', '-c:v:3', 'libx264', '-x264-params', 'nal-hrd=cbr:force-cfr=1', '-b:v:3', '600k', '-maxrate:v:3', '600k', '-bufsize:v:3', '600k', '-preset', 'veryfast', '-g', '48', '-sc_threshold', '0', '-keyint_min', '48',
    '-map', 'a:0', '-c:a:0', 'aac', '-b:a:0', '96k', '-ac', '2',
    '-map', 'a:0', '-c:a:1', 'aac', '-b:a:1', '96k', '-ac', '2',
    '-map', 'a:0', '-c:a:2', 'aac', '-b:a:2', '96k', '-ac', '2',
    '-map', 'a:0', '-c:a:3', 'aac', '-b:a:3', '96k', '-ac', '2',
    '-f', 'hls', '-hls_time', '2', '-hls_flags', 'independent_segments', '-hls_list_size', '2', '-hls_segment_type', 'mpegts', '-hls_segment_filename', `${ouPath}/%v_data%02d.ts`, '-master_pl_name', `index.m3u8`,
    '-var_stream_map', 'v:0,a:0 v:1,a:1 v:2,a:2 v:3,a:3', `${ouPath}/stream_%v.m3u8`
]);
this.ffmpeg_exec = spawn(this.conf.ffmpeg, argv);
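For reference, the -var_stream_map value above pairs each video variant with its own audio rendition, one space-separated group per output playlist. A small helper (hypothetical, for illustration only) shows how the pairing resolves:

```javascript
// Parse an ffmpeg -var_stream_map value into variant descriptors.
// Each space-separated group becomes one output playlist (stream_%v.m3u8).
function parseVarStreamMap(map) {
  return map.split(' ').map((group, index) => {
    const streams = group.split(',');   // e.g. ['v:1', 'a:1']
    return { variant: index, streams };
  });
}

const variants = parseVarStreamMap('v:0,a:0 v:1,a:1 v:2,a:2 v:3,a:3');
console.log(variants.length);       // 4 variants -> stream_0 .. stream_3
console.log(variants[1].streams);   // [ 'v:1', 'a:1' ]
```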



index.m3u8:


#EXTM3U
#EXT-X-VERSION:6
#EXT-X-STREAM-INF:BANDWIDTH=5605600,RESOLUTION=1920x1080,CODECS="avc1.64002a,mp4a.40.2"
stream_0.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=3405600,RESOLUTION=1280x720,CODECS="avc1.640020,mp4a.40.2"
stream_1.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=1205600,RESOLUTION=800x480,CODECS="avc1.64001f,mp4a.40.2"
stream_2.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=765600,RESOLUTION=640x360,CODECS="avc1.64001f,mp4a.40.2"
stream_3.m3u8
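Each variant entry in the master playlist is a #EXT-X-STREAM-INF line followed by its playlist URI. A minimal attribute parser (a hypothetical sketch; real clients such as hls.js do far more) shows how those attributes break down:

```javascript
// Parse the attribute list of an #EXT-X-STREAM-INF line into an object.
// Illustrative only; quoted values (like CODECS) may contain commas.
function parseStreamInf(line) {
  const attrs = {};
  const body = line.replace('#EXT-X-STREAM-INF:', '');
  // Match KEY=value pairs, where value is either quoted or comma-free.
  for (const part of body.match(/[A-Z-]+=("[^"]*"|[^,]*)/g)) {
    const [key, value] = part.split(/=(.+)/);
    attrs[key] = value.replace(/"/g, '');
  }
  return attrs;
}

const info = parseStreamInf(
  '#EXT-X-STREAM-INF:BANDWIDTH=5605600,RESOLUTION=1920x1080,CODECS="avc1.64002a,mp4a.40.2"'
);
console.log(info.BANDWIDTH); // '5605600'
console.log(info.CODECS);    // 'avc1.64002a,mp4a.40.2'
```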



Below are the stream playlists referenced by index.m3u8, with dynamically generated chunks.


stream_0.m3u8:


#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-INDEPENDENT-SEGMENTS
#EXTINF:2.400000,
0_data00.ts



stream_1.m3u8:


#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:87
#EXT-X-INDEPENDENT-SEGMENTS
#EXTINF:1.600000,
1_data87.ts
#EXTINF:2.400000,
1_data88.ts



stream_2.m3u8:


#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:110
#EXT-X-INDEPENDENT-SEGMENTS
#EXTINF:2.400000,
2_data110.ts
#EXTINF:1.600000,
2_data111.ts



stream_3.m3u8:


#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:128
#EXT-X-INDEPENDENT-SEGMENTS
#EXTINF:2.400000,
3_data128.ts
#EXTINF:1.600000,
3_data129.ts



-
How to convert from GL_RGB to AVFrame
11 April 2022, by benny b
For my project I need to convert an RGB (GL_RGB) image generated by glReadPixels into an AVFrame. I've googled it and found only examples of the other way around, but in this case I need to go from GL_RGB to AVFrame.

Here's my code :


Here's how I set my codec :


/* Allocate resources and write header data to the output file. */
void ffmpeg_encoder_start(AVCodecID codec_id, int fps, int width, int height) {
    const AVCodec *codec;
    int ret;

    codec = avcodec_find_encoder(codec_id);
    if (!codec) {
        std::cerr << "Codec not found" << std::endl;
        exit(1);
    }
    c = avcodec_alloc_context3(codec);
    if (!c) {
        std::cerr << "Could not allocate video codec context" << std::endl;
        exit(1);
    }
    c->bit_rate = 400000;
    c->width = width;
    c->height = height;
    c->time_base.num = 1;
    c->time_base.den = fps;
    c->keyint_min = 600;
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    if (avcodec_open2(c, codec, NULL) < 0) {
        std::cerr << "Could not open codec" << std::endl;
        exit(1);
    }
    frame = av_frame_alloc();
    if (!frame) {
        std::cerr << "Could not allocate video frame" << std::endl;
        exit(1);
    }
    frame->format = c->pix_fmt;
    frame->width = c->width;
    frame->height = c->height;
    ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height, c->pix_fmt, 32);
    if (ret < 0) {
        std::cerr << "Could not allocate raw picture buffer" << std::endl;
        exit(1);
    }
}



Fetching the pixels and setting the new frame:


BYTE* pixels = new BYTE[3 * DEFAULT_MONITOR.maxResolution.width * DEFAULT_MONITOR.maxResolution.height];

glReadPixels(0, 0, DEFAULT_MONITOR.maxResolution.width, DEFAULT_MONITOR.maxResolution.height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
screenSrc->setNextFrame(pixels, DEFAULT_MONITOR.maxResolution.width, DEFAULT_MONITOR.maxResolution.height);



And the function I have for the conversion:


static void ffmpeg_encoder_set_frame_yuv_from_rgb(uint8_t *rgb) {
    const int in_linesize[1] = { 3 * c->width };
    sws_context = sws_getCachedContext(sws_context,
                                       c->width, c->height, AV_PIX_FMT_RGB24,
                                       c->width, c->height, AV_PIX_FMT_YUV420P,
                                       0, 0, 0, 0);
    sws_scale(sws_context, (const uint8_t * const *)&rgb, in_linesize, 0,
              c->height, frame->data, frame->linesize);
}
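For orientation, the buffer geometry sws_scale expects here is a single packed RGB24 plane whose stride is 3 * width bytes with no row padding, exactly the size glReadPixels fills in the snippet above. The arithmetic (a quick illustrative check, not part of the original code) is:

```javascript
// Compute the byte sizes sws_scale assumes for a packed RGB24 source:
// in_linesize[0] = 3 * width, and the buffer must hold linesize * height bytes.
function rgbBufferBytes(width, height) {
  const linesize = 3 * width;   // bytes per row for RGB24 (R, G, B per pixel)
  return linesize * height;     // total bytes glReadPixels must have written
}

console.log(rgbBufferBytes(1920, 1080)); // 6220800
```

If the allocated pixel buffer is smaller than this (or the context width/height differ from the glReadPixels dimensions), sws_scale reads past the end of the buffer, which is one common way to get a segmentation fault at that call.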



All the code can be found here. Here is the line that results in the segmentation fault.


Unfortunately, the function gives me a segmentation fault. Do you have an idea how to solve this problem?


-
How to Transcode ALL Audio streams from input to output using ffmpeg?
24 November 2022, by user1940163
I have an input MPEG-TS file 'unit_test.ts'. This file has the following content (shown by ffprobe):


Input #0, mpegts, from 'unit_test.ts':
 Duration: 00:00:57.23, start: 73674.049844, bitrate: 2401 kb/s
 Program 1
 Metadata:
 service_name : Service01
 service_provider: FFmpeg
 Stream #0:0[0x31]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(progressive), 852x480 [SAR 640:639 DAR 16:9], Closed Captions, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
 Stream #0:1[0x34](eng): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, 5.1(side), fltp, 448 kb/s
 Stream #0:2[0x35](spa): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 192 kb/s



I want to convert it into another MPEG-TS file. The requirement is that the video stream of the input should be copied directly to the output, whereas ALL the audio streams should be transcoded to "aac" format.


I tried this command:


ffmpeg -i unit_test.ts -map 0 -c copy -c:a aac maud_test.ts


It converted the input into 'maud_test.ts' with the following contents (shown by ffprobe):


Input #0, mpegts, from 'maud_test.ts':
 Duration: 00:00:57.25, start: 1.400000, bitrate: 2211 kb/s
 Program 1
 Metadata:
 service_name : Service01
 service_provider: FFmpeg
 Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(progressive), 852x480 [SAR 640:639 DAR 16:9], Closed Captions, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
 Stream #0:1[0x101](eng): Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, 6 channels, fltp, 391 kb/s
 Stream #0:2[0x102](spa): Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 133 kb/s



So it appeared as if the command worked. However, when I play the maud_test.ts file in VLC I can see both audio streams listed in the menu, but Stream 1 (eng) remains silent, whereas Stream 2 (spa) has proper audio. (The original TS file has both audio streams properly audible.)


I have tried this with different input files, and the same problem occurs in each case.


What am I doing wrong?


How should I get this done? (I can write explicit stream-by-stream map and codec arguments to achieve it; however, I want the command line to be generic, since the input file could have any configuration with one video and several audio streams of different formats, and the configuration will not be known beforehand.)
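One way to stay generic is to build the argument list programmatically from a probe of the input. The sketch below (hypothetical helper; the audio stream count is assumed to come from a prior ffprobe run) copies the video and assigns an AAC encoder to each audio stream by output index:

```javascript
// Build a generic ffmpeg argv: copy the video stream, transcode every
// audio stream to AAC. audioCount is an assumed input here -- in practice
// it would be discovered by probing the file (e.g. with ffprobe).
function buildArgs(input, output, audioCount) {
  const args = ['-i', input, '-map', '0:v', '-c:v', 'copy'];
  for (let i = 0; i < audioCount; i++) {
    args.push('-map', `0:a:${i}`, `-c:a:${i}`, 'aac');
  }
  args.push(output);
  return args;
}

console.log(buildArgs('unit_test.ts', 'maud_test.ts', 2).join(' '));
// -i unit_test.ts -map 0:v -c:v copy -map 0:a:0 -c:a:0 aac -map 0:a:1 -c:a:1 aac maud_test.ts
```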