
Other articles (55)
-
Making files available
14 April 2011. By default, when it is first set up, MediaSPIP does not allow visitors to download files, whether they are the originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is easy to give visitors access to these documents, in several different forms.
All of this is handled on the skeleton's configuration page. Go to the channel's administration area and choose, in the navigation, (...)
-
Automatic backup of SPIP channels
1 April 2010. When setting up an open platform, it is important for hosts to have reasonably regular backups so they can deal with any problem that may arise.
This task relies on two SPIP plugins: Saveauto, which performs regular backups of the database in the form of a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)
-
Automatic installation script for MediaSPIP
25 April 2011. To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
To use it, you need SSH access to your server and a "root" account, which allows the dependencies to be installed. Contact your hosting provider if you do not have these.
The documentation on using the installation script (...)
On other sites (4405)
-
How to create a video from a collection of jpg images using the ffmpeg library
4 November 2016, by user3214224. I'm new to ffmpeg. I'm trying to create an avi video from a single jpg image using the ffmpeg libraries, but the image in the resulting video is not correct. I have attached two images: one is the actual input image "img.jpg", the other is a frame of the video produced by the code.
On the command line it works perfectly:
ffmpeg -i img.jpg img.avi
Thanks
int flush_encoder(AVFormatContext *fmt_ctx, unsigned int stream_index)
{
int ret;
int got_frame;
AVPacket enc_pkt;
if (!(fmt_ctx->streams[stream_index]->codec->codec->capabilities &
CODEC_CAP_DELAY))
return 0;
while (1) {
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
ret = avcodec_encode_video2 (fmt_ctx->streams[stream_index]->codec, &enc_pkt,
NULL, &got_frame);
av_frame_free(NULL);
if (ret < 0)
break;
if (!got_frame){
ret=0;
break;
}
printf("Flush Encoder: Succeed to encode 1 frame!\tsize:%5d\n",enc_pkt.size);
/* mux encoded frame */
ret = av_write_frame(fmt_ctx, &enc_pkt);
if (ret < 0)
break;
}
return ret;
}

int main (int argc, char *argv[])
{
Gtk::Main kit(argc, argv);
AVFormatContext* pFormatCtx;
AVOutputFormat* fmt;
AVStream* video_st;
AVCodecContext* pCodecCtx;
AVCodec* pCodec;
AVPacket pkt;
uint8_t* picture_buf;
AVFrame* pFrame;
int picture_size;
int y_size;
int framecnt=0;
FILE *in_file = fopen("img.jpg", "rb"); //Input raw YUV data
int in_w=1024,in_h=768; //Input data's width and height
int framenum=100;
const char* out_file = "out.avi";
av_register_all();
//Method1.
pFormatCtx = avformat_alloc_context();
//Guess Format
fmt = av_guess_format(NULL, out_file, NULL);
pFormatCtx->oformat = fmt;
//Open output URL
if (avio_open(&pFormatCtx->pb,out_file, AVIO_FLAG_READ_WRITE) < 0){
printf("Failed to open output file! \n");
return -1;
}
video_st = avformat_new_stream(pFormatCtx, 0);
video_st->time_base.num = 1;
video_st->time_base.den = 25;
if (video_st==NULL){
return -1;
}
//Param that must set
pCodecCtx = video_st->codec;
pCodecCtx->codec_id = fmt->video_codec;
pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
pCodecCtx->width = in_w;
pCodecCtx->height = in_h;
pCodecCtx->time_base.num = 1;
pCodecCtx->time_base.den = 25;
pCodecCtx->bit_rate = 400000;
pCodecCtx->gop_size=250;
pCodecCtx->qmin = 2;
pCodecCtx->qmax = 31;
//Optional Param
pCodecCtx->max_b_frames=3;
// Set Option
AVDictionary *param = 0;
//H.264
if(pCodecCtx->codec_id == AV_CODEC_ID_H264) {
av_dict_set(&param, "preset", "slow", 0);
av_dict_set(&param, "tune", "zerolatency", 0);
}
//H.265
if(pCodecCtx->codec_id == AV_CODEC_ID_H265){
av_dict_set(&param, "preset", "ultrafast", 0);
av_dict_set(&param, "tune", "zero-latency", 0);
}
//Show some Information
av_dump_format(pFormatCtx, 0, out_file, 1);
pCodec = avcodec_find_encoder(pCodecCtx->codec_id);
if (!pCodec){
printf("Can not find encoder! \n");
return -1;
}
if (avcodec_open2(pCodecCtx, pCodec,&param) < 0){
printf("Failed to open encoder! \n");
return -1;
}
pFrame = av_frame_alloc();
picture_size = avpicture_get_size(pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
picture_buf = (uint8_t *)av_malloc(picture_size);
avpicture_fill((AVPicture *)pFrame, picture_buf, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
//Write File Header
avformat_write_header(pFormatCtx,NULL);
av_new_packet(&pkt,picture_size);
y_size = pCodecCtx->width * pCodecCtx->height;
for (int i=0; i<framenum; i++){
//Read raw YUV data
if (fread(picture_buf, 1, y_size*3/2, in_file) <= 0){
printf("Failed to read raw data! \n");
return -1;
}else if(feof(in_file)){
break;
}
pFrame->data[0] = picture_buf; // Y
pFrame->data[1] = picture_buf+ y_size; // U
pFrame->data[2] = picture_buf+ y_size*5/4; // V
//PTS
pFrame->pts=i;
int got_picture=0;
//Encode
int ret = avcodec_encode_video2(pCodecCtx, &pkt,pFrame, &got_picture);
if(ret < 0){
printf("Failed to encode! \n");
return -1;
}
if (got_picture==1){
printf("Succeed to encode frame: %5d\tsize:%5d\n",framecnt,pkt.size);
framecnt++;
pkt.stream_index = video_st->index;
ret = av_write_frame(pFormatCtx, &pkt);
av_free_packet(&pkt);
}
}
//Flush Encoder
int ret = flush_encoder(pFormatCtx,0);
if (ret < 0) {
printf("Flushing encoder failed\n");
return -1;
}
//Write file trailer
av_write_trailer(pFormatCtx);
//Clean
if (video_st){
avcodec_close(video_st->codec);
av_free(pFrame);
av_free(picture_buf);
}
avio_close(pFormatCtx->pb);
avformat_free_context(pFormatCtx);
fclose(in_file);
return 0;
}
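A likely cause of the distorted output is that the loop above fread()s the compressed JPEG bytes straight into picture_buf and treats them as raw YUV420P planes; the command line works because ffmpeg decodes img.jpg before encoding it. Below is a minimal, untested sketch of one way to do that decoding with the same ffmpeg 3.x-era API the code above uses (libavformat to open the file, the JPEG decoder, and libswscale to convert to YUV420P at the encoder's resolution). The helper name decode_jpeg_to_yuv420p is made up for this example; it assumes av_register_all() has already been called (as it is in main), and error handling and cleanup are abbreviated.

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

/* Decode the first video frame of `filename` (e.g. a JPEG) and return it
 * converted to YUV420P at out_w x out_h, or NULL on failure. */
static AVFrame *decode_jpeg_to_yuv420p(const char *filename, int out_w, int out_h)
{
    AVFormatContext *ifmt_ctx = NULL;
    if (avformat_open_input(&ifmt_ctx, filename, NULL, NULL) < 0)
        return NULL;
    if (avformat_find_stream_info(ifmt_ctx, NULL) < 0)
        return NULL;

    int stream_idx = av_find_best_stream(ifmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (stream_idx < 0)
        return NULL;

    /* Same (deprecated) streams[i]->codec access as the code above. */
    AVCodecContext *dec_ctx = ifmt_ctx->streams[stream_idx]->codec;
    AVCodec *dec = avcodec_find_decoder(dec_ctx->codec_id); /* mjpeg for a .jpg */
    if (!dec || avcodec_open2(dec_ctx, dec, NULL) < 0)
        return NULL;

    AVFrame *decoded = av_frame_alloc();
    AVPacket pkt;
    av_init_packet(&pkt);
    int got_frame = 0;
    while (!got_frame && av_read_frame(ifmt_ctx, &pkt) >= 0) {
        if (pkt.stream_index == stream_idx)
            avcodec_decode_video2(dec_ctx, decoded, &got_frame, &pkt);
        av_packet_unref(&pkt);
    }
    if (!got_frame)
        return NULL;

    /* JPEGs usually decode to yuvj420p; rescale/convert to YUV420P. */
    AVFrame *yuv = av_frame_alloc();
    yuv->format = AV_PIX_FMT_YUV420P;
    yuv->width  = out_w;
    yuv->height = out_h;
    av_frame_get_buffer(yuv, 32);

    struct SwsContext *sws = sws_getContext(dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
                                            out_w, out_h, AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    sws_scale(sws, (const uint8_t * const *)decoded->data, decoded->linesize,
              0, dec_ctx->height, yuv->data, yuv->linesize);

    sws_freeContext(sws);
    av_frame_free(&decoded);
    avformat_close_input(&ifmt_ctx);
    return yuv;
}

The frame returned by this helper could then replace the fread/picture_buf block inside the encoding loop: set its pts each iteration and pass it to avcodec_encode_video2 instead of pFrame.
-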
Videojs does not play converted mp4 file
4 February 2017, by Chris Palmer Breuer. I am creating a platform that video files can be uploaded to. So that they can be played through Videojs, I convert/demux mkv/flv/3gp files to mp4 files.
The issue I run into is that when I demux such a video/mkv file, I get the error message
Video format or mime type not supported
even though it is a "working" mp4 file on my computer. I think I understand that mkv files are containers, and that demuxing keeps the same video and audio codecs, so if those codecs are not supported by Videojs/HTML5 the video cannot be played. Please correct me if I am wrong.
Can anyone please tell me why this demux of mkv.mkv to mkv.mp4 will not play in my browser?
➜ ~ ffmpeg -i mkv.mkv -vcodec copy -acodec copy mkv.mp4
ffmpeg version 3.2.2 Copyright (c) 2000-2016 the FFmpeg developers
built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.2.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
libavutil 55. 34.100 / 55. 34.100
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.100 / 57. 56.100
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libavresample 3. 1. 0 / 3. 1. 0
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
Input #0, matroska,webm, from 'mkv.mkv':
Metadata:
ENCODER : Lavf53.24.2
Duration: 00:00:34.08, start: -1.400000, bitrate: 1232 kb/s
Stream #0:0: Video: mpeg4 (Simple Profile), yuv420p, 720x480 [SAR 1:1 DAR 3:2], 25 fps, 25 tbr, 1k tbn, 25 tbc (default)
Stream #0:1: Audio: aac (LC), 48000 Hz, 5.1, fltp (default)
Output #0, mp4, to 'mkv.mp4':
Metadata:
encoder : Lavf57.56.100
Stream #0:0: Video: mpeg4 (Simple Profile) ( [0][0][0] / 0x0020), yuv420p, 720x480 [SAR 1:1 DAR 3:2], q=2-31, 25 fps, 25 tbr, 16k tbn, 1k tbc (default)
Stream #0:1: Audio: aac (LC) ([64][0][0][0] / 0x0040), 48000 Hz, 5.1 (default)
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame= 887 fps=0.0 q=-1.0 Lsize= 5130kB time=00:00:35.45 bitrate=1185.4kbits/s speed= 668x
video:3447kB audio:1663kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.401607%
Thanks for all the help already. I can't seem to find an answer...
-
ffmpeg livestream only shows one frame at a time
28 October 2016, by user3308335. So, I've tried to turn one of my Pis into a silly "baby cam" for my pet, and I followed the tutorial made by Ustream.tv on how to do this.
This is the script I run to start the stream:
#!/bin/bash
RTMP_URL=<rtmpurl>
STREAM_KEY=<streamkey>
while :
do
raspivid -n -hf -t 0 -w 640 -h 480 -fps 15 -b 400000 -o - | ffmpeg -i - -vcodec copy -an -f flv $RTMP_URL/$STREAM_KEY
sleep 2
done
However, whenever I go to view the stream, it shows only a single frame. It stays on that same frame until I refresh the browser and watch the ad again, at which point it shows a new, but again frozen, frame.
Does anyone have an idea why this might be happening, or any troubleshooting tricks for me to try?