
Other articles (57)
-
Automatic backup of SPIP channels
1 April 2010
When running an open platform, it is important for hosts to have fairly regular backups available in order to guard against any eventual problem.
To accomplish this task we rely on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...) -
Automated installation script of MediaSPIP
25 April 2011
To overcome the installation difficulties, due mainly to server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server running a compatible Linux distribution.
You must have SSH access to your server and a root account in order to use it, so that the dependencies can be installed. Contact your hosting provider if you do not have these.
The documentation for this installation script is available here.
The code of this (...)
On other sites (8893)
-
How to create a video from a collection of jpg images using the ffmpeg library
4 November 2016, by user3214224
I'm new to ffmpeg. I am trying to create an AVI video from a single JPG image using the ffmpeg libraries, but the frame in the created video is not correct. I have attached two images: one is the actual input "img.jpg", the other is the frame produced by my code.
On the command line it works perfectly:
ffmpeg -i img.jpg img.avi
Thanks
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/mem.h>
}
#include <gtkmm.h>

int flush_encoder(AVFormatContext *fmt_ctx, unsigned int stream_index)
{
int ret;
int got_frame;
AVPacket enc_pkt;
if (!(fmt_ctx->streams[stream_index]->codec->codec->capabilities &
CODEC_CAP_DELAY))
return 0;
while (1) {
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
ret = avcodec_encode_video2(fmt_ctx->streams[stream_index]->codec, &enc_pkt,
NULL, &got_frame);
if (ret < 0)
break;
if (!got_frame){
ret=0;
break;
}
printf("Flush Encoder: Succeed to encode 1 frame!\tsize:%5d\n",enc_pkt.size);
/* mux encoded frame */
ret = av_write_frame(fmt_ctx, &enc_pkt);
if (ret < 0)
break;
}
return ret;
}

int main(int argc, char *argv[])
{
Gtk::Main kit(argc, argv);
AVFormatContext* pFormatCtx;
AVOutputFormat* fmt;
AVStream* video_st;
AVCodecContext* pCodecCtx;
AVCodec* pCodec;
AVPacket pkt;
uint8_t* picture_buf;
AVFrame* pFrame;
int picture_size;
int y_size;
int framecnt=0;
FILE *in_file = fopen("img.jpg", "rb"); //input file (note: this is a compressed JPEG, not raw YUV)
int in_w=1024,in_h=768; //Input data's width and height
int framenum=100;
const char* out_file = "out.avi";
av_register_all();
//Method1.
pFormatCtx = avformat_alloc_context();
//Guess Format
fmt = av_guess_format(NULL, out_file, NULL);
pFormatCtx->oformat = fmt;
//Open output URL
if (avio_open(&pFormatCtx->pb,out_file, AVIO_FLAG_READ_WRITE) < 0){
printf("Failed to open output file! \n");
return -1;
}
video_st = avformat_new_stream(pFormatCtx, 0);
if (video_st==NULL){
return -1;
}
video_st->time_base.num = 1;
video_st->time_base.den = 25;
//Parameters that must be set
pCodecCtx = video_st->codec;
pCodecCtx->codec_id = fmt->video_codec;
pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
pCodecCtx->width = in_w;
pCodecCtx->height = in_h;
pCodecCtx->time_base.num = 1;
pCodecCtx->time_base.den = 25;
pCodecCtx->bit_rate = 400000;
pCodecCtx->gop_size=250;
pCodecCtx->qmin = 2;
pCodecCtx->qmax = 31;
//Optional Param
pCodecCtx->max_b_frames=3;
// Set Option
AVDictionary *param = 0;
//H.264
if(pCodecCtx->codec_id == AV_CODEC_ID_H264) {
av_dict_set(&param, "preset", "slow", 0);
av_dict_set(&param, "tune", "zerolatency", 0);
}
//H.265
if(pCodecCtx->codec_id == AV_CODEC_ID_H265){
av_dict_set(&param, "preset", "ultrafast", 0);
av_dict_set(&param, "tune", "zero-latency", 0);
}
//Show some Information
av_dump_format(pFormatCtx, 0, out_file, 1);
pCodec = avcodec_find_encoder(pCodecCtx->codec_id);
if (!pCodec){
printf("Can not find encoder! \n");
return -1;
}
if (avcodec_open2(pCodecCtx, pCodec,&param) < 0){
printf("Failed to open encoder! \n");
return -1;
}
pFrame = av_frame_alloc();
picture_size = avpicture_get_size(pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
picture_buf = (uint8_t *)av_malloc(picture_size);
avpicture_fill((AVPicture *)pFrame, picture_buf, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
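//avpicture_fill() points pFrame->data[0..2] into picture_buf, so reading into picture_buf below also fills the frame's planes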
//Write File Header
avformat_write_header(pFormatCtx,NULL);
av_new_packet(&pkt,picture_size);
y_size = pCodecCtx->width * pCodecCtx->height;
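//one YUV420P frame occupies y_size*3/2 bytes: a full-resolution Y plane plus quarter-size U and V planes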
for (int i=0; i<framenum; i++){ //Read raw YUV data
if (fread(picture_buf, 1, y_size*3/2, in_file) <= 0){
printf("Failed to read raw data! \n");
return -1;
}else if(feof(in_file)){
break;
}
pFrame->data[0] = picture_buf; // Y
pFrame->data[1] = picture_buf+ y_size; // U
pFrame->data[2] = picture_buf+ y_size*5/4; // V
//PTS
pFrame->pts=i;
int got_picture=0;
//Encode
int ret = avcodec_encode_video2(pCodecCtx, &pkt,pFrame, &got_picture);
if(ret < 0){
printf("Failed to encode! \n");
return -1;
}
if (got_picture==1){
printf("Succeed to encode frame: %5d\tsize:%5d\n",framecnt,pkt.size);
framecnt++;
pkt.stream_index = video_st->index;
ret = av_write_frame(pFormatCtx, &pkt);
av_free_packet(&pkt);
}
}
//Flush Encoder
int ret = flush_encoder(pFormatCtx,0);
if (ret < 0) {
printf("Flushing encoder failed\n");
return -1;
}
//Write file trailer
av_write_trailer(pFormatCtx);
//Clean
if (video_st){
avcodec_close(video_st->codec);
av_free(pFrame);
av_free(picture_buf);
}
avio_close(pFormatCtx->pb);
avformat_free_context(pFormatCtx);
fclose(in_file);
return 0;
}
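Note that the code above fread()s the compressed bytes of img.jpg and hands them to the encoder as if they were raw YUV420P planes, which is the likely reason the output frame looks wrong. Below is a minimal sketch of the missing step, written against the same 2016-era API as the question's code (the helper name load_jpeg_as_yuv420p is made up for illustration): decode the JPEG first, then convert the decoded frame to YUV420P with sws_scale, and feed that frame to the encoder instead of picture_buf.
#include <libswscale/swscale.h>

static AVFrame* load_jpeg_as_yuv420p(const char *path, int out_w, int out_h)
{
    AVFormatContext *ic = NULL;
    if (avformat_open_input(&ic, path, NULL, NULL) < 0) return NULL;
    if (avformat_find_stream_info(ic, NULL) < 0) return NULL;
    AVCodecContext *dec_ctx = ic->streams[0]->codec; // a single image has one stream
    AVCodec *dec = avcodec_find_decoder(dec_ctx->codec_id);
    if (!dec || avcodec_open2(dec_ctx, dec, NULL) < 0) return NULL;
    AVFrame *src = av_frame_alloc();
    AVPacket dpkt;
    int got = 0;
    while (!got && av_read_frame(ic, &dpkt) >= 0) {
        avcodec_decode_video2(dec_ctx, src, &got, &dpkt);
        av_free_packet(&dpkt);
    }
    if (!got) return NULL;
    // JPEG usually decodes to YUVJ420P; rescale/convert it to the encoder's YUV420P
    AVFrame *dst = av_frame_alloc();
    uint8_t *buf = (uint8_t *)av_malloc(avpicture_get_size(AV_PIX_FMT_YUV420P, out_w, out_h));
    avpicture_fill((AVPicture *)dst, buf, AV_PIX_FMT_YUV420P, out_w, out_h);
    struct SwsContext *sws = sws_getContext(dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
                                            out_w, out_h, AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    sws_scale(sws, (const uint8_t * const *)src->data, src->linesize, 0, dec_ctx->height,
              dst->data, dst->linesize);
    sws_freeContext(sws);
    av_frame_free(&src);
    avformat_close_input(&ic);
    return dst; // pass this frame to avcodec_encode_video2() once per output frame
}
-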
Encoding RGB frames using x264 and AVCodec in C
6 November 2016, by deepwork
I have RGB24 frames streamed from a camera and I want to encode them to H.264. I found that AVCodec and x264 can do this. The problem is that x264 by default accepts YUV420 as input, so I wrote a program that converts the RGB frames to YUV420 using the sws_scale function. This works well, except that it cannot sustain the required FPS, because the RGB->YUV420 conversion takes time.
This is how I set up my encoder context:
videoStream->id = 0;
vCodecCtx = videoStream->codec;
vCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
vCodecCtx->codec_id = AV_CODEC_ID_H264;
vCodecCtx->bit_rate = 400000;
vCodecCtx->width = Width;
vCodecCtx->height = Height;
vCodecCtx->time_base.den = FPS;
vCodecCtx->time_base.num = 1;
//vCodecCtx->time_base = (AVRational){1,};
vCodecCtx->gop_size = 12;
vCodecCtx->max_b_frames = 1;
vCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
if(formatCtx->oformat->flags & AVFMT_GLOBALHEADER)
vCodecCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;
av_opt_set(vCodecCtx->priv_data, "preset", "ultrafast", 0);
av_opt_set(vCodecCtx->priv_data, "profile", "baseline", AV_OPT_SEARCH_CHILDREN);
if (avcodec_open2(vCodecCtx, h264Codec, NULL) < 0){
return 0;
}
When I change AV_PIX_FMT_YUV420P to AV_PIX_FMT_RGB24, avcodec_open2 fails.
I read that there is a version of libx264 for RGB called libx264rgb, but I don't know whether I have to rebuild x264 with this option enabled, download another source, or select it programmatically with the existing x264 lib. The question is how to enable RGB as input to libx264 for use with libavcodec in C, or how to make the encoding or sws_scale faster.
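For what it's worth, a hedged sketch of the RGB path: FFmpeg exposes x264's RGB support as a separate encoder named "libx264rgb" (it comes from the same x264 library, so an ordinary --enable-libx264 build suffices and no x264 rebuild is needed). It accepts rgb24/bgr24/bgr0 input and encodes to the High 4:4:4 profile, so the "baseline" profile option above would have to be dropped:
AVCodec *rgbCodec = avcodec_find_encoder_by_name("libx264rgb"); // instead of looking up AV_CODEC_ID_H264
vCodecCtx->pix_fmt = AV_PIX_FMT_RGB24; // no sws_scale conversion step needed
// codec_id stays AV_CODEC_ID_H264: libx264rgb is an encoder name, not a codec id
if (avcodec_open2(vCodecCtx, rgbCodec, NULL) < 0)
    return 0;
Keep in mind that RGB H.264 streams are poorly supported by many decoders, which is one reason pipelines usually convert to YUV anyway.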
Edit:
How I built ffmpeg:
NDK=D:/AndroidDev/android-ndk-r9
PLATFORM=$NDK/platforms/android-18/arch-arm/
PREBUILT=$NDK/toolchains/arm-linux-androideabi-4.8/prebuilt/windows-x86_64
GENERAL="\
--enable-small \
--enable-cross-compile \
--extra-libs="-lgcc" \
--arch=arm \
--cc=$PREBUILT/bin/arm-linux-androideabi-gcc \
--cross-prefix=$PREBUILT/bin/arm-linux-androideabi- \
--nm=$PREBUILT/bin/arm-linux-androideabi-nm \
--extra-cflags="-I../x264/android/arm/include" \
--extra-ldflags="-L../x264/android/arm/lib" "
MODULES="\
--enable-gpl \
--enable-libx264"
function build_ARMv6
{
./configure \
--target-os=linux \
--prefix=./android/armeabi \
${GENERAL} \
--sysroot=$PLATFORM \
--enable-shared \
--disable-static \
--extra-cflags=" -O3 -fpic -fasm -Wno-psabi -fno-short-enums -fno-strict-aliasing -finline-limit=300 -mfloat-abi=softfp -mfpu=vfp -marm -march=armv6" \
--extra-ldflags="-lx264 -Wl,-rpath-link=$PLATFORM/usr/lib -L$PLATFORM/usr/lib -nostdlib -lc -lm -ldl -llog" \
--enable-zlib \
${MODULES} \
--disable-doc \
--enable-neon
make clean
make
make install
}
build_ARMv6
echo Android ARMEABI builds finished
How I built x264:
NDK=D:/AndroidDev/android-ndk-r9
PLATFORM=$NDK/platforms/android-18/arch-arm/
TOOLCHAIN=$NDK/toolchains/arm-linux-androideabi-4.8/prebuilt/windows-x86_64
PREFIX=./android/arm
function build_one
{
./configure \
--prefix=$PREFIX \
--enable-static \
--enable-pic \
--host=arm-linux \
--cross-prefix=$TOOLCHAIN/bin/arm-linux-androideabi- \
--sysroot=$PLATFORM
make clean
make
make install
}
build_one
echo Android ARM builds finished -
Duration change after transcoding ts files
1 May 2017, by Feilong Luo
I have a problem with transcoding using ffmpeg.
I want to convert an m3u8 to mp4, so I transcode every ts file first and then concat them into an mp4, but I found that the duration becomes longer than that of the source file.
The source file is:
http://oc7iy3eta.bkt.clouddn.com/src_20.ts
After transcoding, the test file is:
http://oc7iy3eta.bkt.clouddn.com/test_20.ts
I use the command below to change to 5 fps and a 400k bitrate:
sudo ffmpeg -analyzeduration 2147483647 -probesize 2147483647 -nostdin -y -v warning -i ./src_20.ts -threads 3 -movflags faststart -metadata:s:v rotate=0 -chunk_duration 520000 -video_track_timescale 25000 -pix_fmt yuv420p -copytb 1 -vcodec libx264 -b:v 400000 -minrate 400000 -maxrate 400000 -bufsize 500k -force_key_frames "expr:gte(t,n_forced*2)" -vsync 1 -r 5 -s 544*960 -acodec libfaac -async 1 ./test_20.ts
I use ffprobe to inspect the video info.
Source file info:
Duration : 00:00:01.26, start : 28.346989, bitrate : 921 kb/s
Program 1
Metadata :
service_name : Service01
service_provider : FFmpeg
Stream #0:0[0x100] : Audio : aac ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 23 kb/s
Stream #0:1[0x101] : Video : h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 544x960, 10.67 tbr, 90k tbn, 180k tbc
Test file info:
Input #0, mpegts, from ’test_20.ts’ :
Duration : 00:00:01.62, start : 1.576778, bitrate : 447 kb/s
Program 1
Metadata :
service_name : Service01
service_provider : FFmpeg
Stream #0:0[0x100] : Video : h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 544x960, 5 fps, 5 tbr, 90k tbn, 10 tbc
Stream #0:1[0x101] : Audio : aac ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 5 kb/s
=======================================================================
Question
So, we can see that the duration of the source file is 1.26 s, but after transcoding the test file is 1.62 s.
Why? Can anybody help?