
Other articles (75)
-
Automatic backup of SPIP channels
1 April 2010
As part of setting up an open platform, it is important for hosts to have reasonably regular backups in order to guard against any problem that may arise.
To carry out this task we rely on two SPIP plugins: Saveauto, which performs a regular backup of the database in the form of a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which produces a zip archive of the site's important data (the documents, the elements (...)
-
Automatic MediaSPIP installation script
25 April 2011
To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
You must have SSH access to your server and a "root" account in order to use it, so that the dependencies can be installed. Contact your hosting provider if you do not have these.
The documentation on using the installation script (...)
-
Automated installation script of MediaSPIP
25 April 2011
To overcome the difficulties caused mainly by server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server running a compatible Linux distribution.
You must have SSH access to your server and a root account to use it, so that the dependencies can be installed. Contact your hosting provider if you do not have that.
The documentation on using this installation script is available here.
The code of this (...)
On other sites (6816)
-
Using FFmpeg with URL input causes SIGSEGV in AWS Lambda (Python runtime)
26 March, by Dave94
I'm trying to implement a video-converting solution on AWS Lambda, following their article named Processing user-generated content using AWS Lambda and FFmpeg. However, when I run my command with subprocess.Popen() it returns -11, which translates to SIGSEGV (segmentation fault).
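A negative return code from subprocess is the number of the signal that killed the child process. As a minimal standalone sketch (not part of the original handler), the snippet below turns such a return code into a readable signal name:

import signal
import subprocess

# Run a command and report whether it exited normally or was killed by a signal.
proc = subprocess.run(["/opt/bin/ffmpeg", "-version"], capture_output=True)
if proc.returncode < 0:
    # subprocess reports "killed by signal N" as a return code of -N, e.g. -11 -> SIGSEGV
    print("killed by", signal.Signals(-proc.returncode).name)
else:
    print("exited with code", proc.returncode)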
I've tried processing the video with the newest (4.3.1) static build from John Van Sickle's site as well as with the "official" ffmpeg-lambda-layer, but it doesn't seem to matter which one I use; the result is the same.

If I download the video to the Lambda's /tmp directory and pass this downloaded file as the input to FFmpeg, it works correctly (with the same parameters). However, I'm trying to avoid this, as the /tmp directory's maximum size is only 512 MB, which is not quite enough for me.
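For context, that download-to-/tmp workaround looks roughly like the sketch below (the bucket, key and helper name are placeholders, not taken from the original code); this is exactly what the 512 MB limit makes impractical for larger inputs:

import shutil
import boto3

s3 = boto3.client("s3")

def download_to_tmp(bucket, key):
    # /tmp is the only writable path in a Lambda environment and is capped at 512 MB here.
    local_path = "/tmp/" + key.split("/")[-1]
    print("free space in /tmp:", shutil.disk_usage("/tmp").free)
    s3.download_file(bucket, key, local_path)  # fails if the object does not fit
    return local_path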

The relevant code which returns SIGSEGV:


import shlex
import subprocess

# s3_source_signed_url is a presigned S3 URL generated earlier in the handler
ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + s3_source_signed_url + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
command1 = shlex.split(ffmpeg_cmd)
p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p1.communicate()
print(p1.returncode)  # prints -11



stderr of FFmpeg:


ffmpeg version 4.1.3-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers
 built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
 configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzvbi --enable-libzimg
 libavutil 56. 22.100 / 56. 22.100
 libavcodec 58. 35.100 / 58. 35.100
 libavformat 58. 20.100 / 58. 20.100
 libavdevice 58. 5.100 / 58. 5.100
 libavfilter 7. 40.101 / 7. 40.101
 libswscale 5. 3.100 / 5. 3.100
 libswresample 3. 3.100 / 3. 3.100
 libpostproc 55. 3.100 / 55. 3.100
[tcp @ 0x728cc00] Starting connection attempt to 52.219.74.177 port 443
[tcp @ 0x728cc00] Successfully connected to 52.219.74.177 port 443
[h264 @ 0x729b780] Reinit context to 1280x720, pix_fmt: yuv420p
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'https://bucket.s3.amazonaws.com --> presigned url with 15 min expiration time':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp42mp41isomavc1
 creation_time : 2015-09-02T07:42:42.000000Z
 Duration: 00:00:15.64, start: 0.000000, bitrate: 2640 kb/s
 Stream #0:0(und): Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt709, left), 1280x720 [SAR 1:1 DAR 16:9], 2475 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
 Metadata:
 creation_time : 2015-09-02T07:42:42.000000Z
 handler_name : L-SMASH Video Handler
 encoder : AVC Coding
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default)
 Metadata:
 creation_time : 2015-09-02T07:42:42.000000Z
 handler_name : L-SMASH Audio Handler
[mp3 @ 0x733f340] Skipping 0 bytes of junk at 1344.
Input #1, mp3, from '/opt/bin/audio.mp3':
 Metadata:
 encoded_by : Logic Pro X
 date : 2021-01-03
 coding_history : 
 time_reference : 158760000
 umid : 0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004500F9E4
 encoder : Lavf58.49.100
 Duration: 00:04:01.21, start: 0.025057, bitrate: 320 kb/s
 Stream #1:0: Audio: mp3, 44100 Hz, stereo, fltp, 320 kb/s
 Metadata:
 encoder : Lavc58.97
Input #2, png_pipe, from '/opt/bin/watermark.png':
 Duration: N/A, bitrate: N/A
 Stream #2:0: Video: png, 1 reference frame, rgba(pc), 701x190 [SAR 1521:1521 DAR 701:190], 25 tbr, 25 tbn, 25 tbc
[Parsed_scale_0 @ 0x7341140] w:1920 h:1080 flags:'bilinear' interl:0
Stream mapping:
 Stream #0:0 (h264) -> scale
 Stream #2:0 (png) -> overlay:overlay
 format -> Stream #0:0 (libx264)
 Stream #1:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
[h264 @ 0x72d8600] Reinit context to 1280x720, pix_fmt: yuv420p
[Parsed_scale_0 @ 0x733c1c0] w:1920 h:1080 flags:'bilinear' interl:0
[graph 0 input from stream 0:0 @ 0x7669200] w:1280 h:720 pixfmt:yuv420p tb:1/25 fr:25/1 sar:1/1 sws_param:flags=2
[graph 0 input from stream 2:0 @ 0x766a980] w:701 h:190 pixfmt:rgba tb:1/25 fr:25/1 sar:1521/1521 sws_param:flags=2
[auto_scaler_0 @ 0x7670240] w:iw h:ih flags:'bilinear' interl:0
[deinterlace_in_2_0 @ 0x766b680] auto-inserting filter 'auto_scaler_0' between the filter 'graph 0 input from stream 2:0' and the filter 'deinterlace_in_2_0'
[Parsed_scale_0 @ 0x733c1c0] w:1280 h:720 fmt:yuv420p sar:1/1 -> w:1920 h:1080 fmt:yuv420p sar:1/1 flags:0x2
[Parsed_pad_1 @ 0x733ce00] w:1920 h:1080 -> w:1920 h:1080 x:0 y:0 color:0x000000FF
[Parsed_setsar_2 @ 0x733da00] w:1920 h:1080 sar:1/1 dar:16/9 -> sar:1/1 dar:16/9
[auto_scaler_0 @ 0x7670240] w:701 h:190 fmt:rgba sar:1521/1521 -> w:701 h:190 fmt:yuva420p sar:1/1 flags:0x2
[Parsed_overlay_3 @ 0x733e440] main w:1920 h:1080 fmt:yuv420p overlay w:701 h:190 fmt:yuva420p
[Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Selected 1/50 time base
[Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Sync level 2
[libx264 @ 0x72c1c00] using SAR=1/1
[libx264 @ 0x72c1c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x72c1c00] profile Progressive High, level 4.0, 4:2:0, 8-bit
[libx264 @ 0x72c1c00] 264 - core 157 r2969 d4099dd - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=9 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=60 keyint_min=6 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=abr mbtree=1 bitrate=4500 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, flv, to 'pipe:':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp42mp41isomavc1
 encoder : Lavf58.20.100
 Stream #0:0: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 4500 kb/s, 30 fps, 1k tbn, 30 tbc (default)
 Metadata:
 encoder : Lavc58.35.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/4500000 buffer size: 0 vbv_delay: -1
 Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 44100 Hz, stereo, fltp, 320 kb/s
 Metadata:
 encoder : Lavc58.97
frame= 27 fps=0.0 q=32.0 size= 247kB time=00:00:00.03 bitrate=59500.0kbits/s speed=0.0672x
frame= 77 fps= 77 q=27.0 size= 1115kB time=00:00:02.03 bitrate=4478.0kbits/s speed=2.03x
frame= 126 fps= 83 q=25.0 size= 2302kB time=00:00:04.00 bitrate=4712.4kbits/s speed=2.64x
frame= 177 fps= 87 q=26.0 size= 3576kB time=00:00:06.03 bitrate=4854.4kbits/s speed=2.97x
frame= 225 fps= 88 q=25.0 size= 4910kB time=00:00:07.96 bitrate=5047.8kbits/s speed=3.13x
frame= 272 fps= 89 q=27.0 size= 6189kB time=00:00:09.84 bitrate=5147.9kbits/s speed=3.22x
frame= 320 fps= 90 q=27.0 size= 7058kB time=00:00:11.78 bitrate=4907.5kbits/s speed=3.31x
frame= 372 fps= 91 q=26.0 size= 8098kB time=00:00:13.84 bitrate=4791.0kbits/s speed=3.4x



And that's the end of it. It should continue processing until 00:04:02, as that is my audio's length, but it stops here every time (this is approximately my video's length).

The relevant code which works correctly:


# s3_source_key names the video file previously downloaded to /tmp
ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + '/tmp/' + s3_source_key + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
command1 = shlex.split(ffmpeg_cmd)
p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p1.communicate()
print(p1.returncode)  # prints 0



With this code the video is looped as many times as needed to match the length of the audio.
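That looping behaviour comes from combining -stream_loop -1 (repeat the input that follows it indefinitely) with -shortest (stop when the shortest mapped input, here the audio, ends). A minimal sketch of just that mechanism, with placeholder file names rather than the original inputs:

import subprocess

# Loop video.mp4 endlessly and cut the output when audio.mp3 ends.
cmd = [
    "ffmpeg",
    "-stream_loop", "-1",   # repeat the next input (-1 means infinitely)
    "-i", "video.mp4",
    "-i", "audio.mp3",
    "-map", "0:v:0", "-map", "1:a:0",
    "-shortest",            # stop when the shortest output stream ends
    "-y", "out.mp4",
]
subprocess.run(cmd, check=True)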


Both versions work correctly on my computer.


This question is almost the same but in my case FFmpeg is able to access the signed URL.


-
FFMpeg RGB32 to NV12 using SWScale
28 April 2016, by KevinA
I'm trying to convert RGB32 frames to NV12 frames to feed into an encoder.
m_iWidthIn = 1920;
m_iHeightIn = 1080;
m_iWidthOut = (((m_iWidthIn + 31) >> 5) << 5);   // align to a multiple of 32
m_iHeightOut = (((m_iHeightIn + 31) >> 5) << 5); // align to a multiple of 32
m_outputPixelFormat = AV_PIX_FMT_NV12;

// allocate and fill buffers
m_sws = ::sws_getContext(m_iWidthIn, m_iHeightIn, AV_PIX_FMT_RGB32, m_iWidthOut, m_iHeightOut, m_outputPixelFormat, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);

AVFrame* frameOut = av_frame_alloc();
frameOut->height = m_iHeightOut;
frameOut->width = m_iWidthOut;
frameOut->format = m_outputPixelFormat;
av_frame_get_buffer(frameOut, 32);

int linesize[1] = { m_iWidthIn * 4 };
uint8_t * data[1] = { m_inputBuffer };
if (m_bFlip)
{
    data[0] += linesize[0] * (m_iHeightIn - 1);
    linesize[0] = -linesize[0];
}
::sws_scale(m_sws, data, linesize, 0, m_iHeightIn, frameOut->data, frameOut->linesize);
::av_image_copy_to_buffer(pOutputBuffer, lDataLen, frameOut->data, frameOut->linesize, m_outputPixelFormat, m_iWidthOut, m_iHeightOut, 32);

If I make m_outputPixelFormat AV_PIX_FMT_RGB32 and use a DMO colorspace converter, the video comes out correctly. However, if I change it to NV12, I end up with a slanted video with missing data at the bottom.
I know this is caused by me copying the data out of the buffer incorrectly, but I'm not sure what I'm doing wrong.
-
FFMpeg muxing h264 to mp4 resultant file is not running
11 March 2016, by M.Taha
I am writing code to mux an H.264 file into MP4 using the latest FFmpeg 3.0. The muxing works, but the resulting MP4 file does not show any video; only the time advances when I play it. Kindly help me solve this; please tell me what I am missing. Here is my code:
int main()
{
    avcodec_register_all();
    av_register_all();

    // define AVOutputFormat
    AVOutputFormat *avoutputFormat = NULL;
    avoutputFormat = av_guess_format("mp4", NULL, NULL);
    if (!avoutputFormat) {
        fprintf(stderr, "Could not find suitable output format\n");
        return 1;
    }

    AVFormatContext *avoutFmtCtx = NULL;
    /* Below initialize AVFormatContext. */
    avformat_alloc_output_context2(&avoutFmtCtx, avoutputFormat, NULL, NULL);
    avoutFmtCtx->oformat = avoutputFormat;
    sprintf_s(avoutFmtCtx->filename, "%s", "Out.mp4"); /* Filling output file name.. */

    if (avoutputFormat->video_codec == AV_CODEC_ID_NONE)
        printf("\n Unsupported format...");

    AVStream * avoutStrm = avformat_new_stream(avoutFmtCtx, 0);
    if (!avoutStrm) {
        _tcprintf(_T("FFMPEG: Could not alloc video stream\n"));
    }

    AVCodecContext *c = avoutStrm->codec;
    c->codec_id = avoutputFormat->video_codec;
    c->codec_type = AVMEDIA_TYPE_VIDEO;
    c->bit_rate = 2000*1000;
    c->width = 1920;
    c->height = 1080;

    AVRational avr;
    avr.den = 30;
    avr.num = 1;
    avoutStrm->time_base = c->time_base = av_add_q(avoutStrm->codec->time_base, avr);

    // Some formats want stream headers to be separate
    if (avoutFmtCtx->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= CODEC_FLAG_GLOBAL_HEADER;

    // Open the output container file
    if (avio_open(&avoutFmtCtx->pb, avoutFmtCtx->filename, AVIO_FLAG_WRITE) < 0)
    {
        _tcprintf(_T("FFMPEG: Could not open '%s'\n"), avoutFmtCtx->filename);
    }

    m_pExtDataBuffer = (uint8_t*)av_malloc(1000 + 1000);
    if (!m_pExtDataBuffer) {
        _tcprintf(_T("FFMPEG: could not allocate required buffer\n"));
    }
    uint8_t SPSbuf[1000];
    uint8_t PPSbuf[1000];
    memcpy(m_pExtDataBuffer, SPSbuf, 1000);
    memcpy(m_pExtDataBuffer + 1000, PPSbuf, 1000);

    /* Codec "extradata" conveys the H.264 stream SPS and PPS info (MPEG2: sequence header is housed in SPS buffer, PPS buffer is empty) */
    c->extradata = m_pExtDataBuffer;
    c->extradata_size = 1000 + 1000;

    if (avformat_write_header(avoutFmtCtx, NULL)) {
        _tcprintf(_T("FFMPEG: avformat_write_header error!\n"));
    }

    /* Here do writing data in loop... */
    int m_nProcessedFramesNum = 0;
    while (1)
    {
        ++m_nProcessedFramesNum;
        AVPacket pkt;
        av_init_packet(&pkt);
        AVCodecContext *c = avoutStrm->codec;
        avoutStrm->pts.val = m_nProcessedFramesNum;
        pkt.stream_index = avoutStrm->index;
        pkt.data = /* Filling h.264 data from here... This is valid h264 data */
        pkt.size = /* Filling valid h264 data size here... */
        av_new_packet(&pkt, pkt.size);
        pkt.pts = m_nProcessedFramesNum*512;
        pkt.dts = m_nProcessedFramesNum*512;
        pkt.duration = 512;
        // Write the compressed frame in the media file
        if (av_interleaved_write_frame(avoutFmtCtx, &pkt)) {
            _tcprintf(_T("FFMPEG: Error while writing video frame\n"));
        }
        av_free_packet(&pkt);
    } /* End of loop. */

    av_write_trailer(avoutFmtCtx);
    avio_close(avoutFmtCtx->pb);
    avformat_free_context(avoutFmtCtx);
}