
Other articles (34)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)
-
Accepted formats
28 January 2010
The following commands provide information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
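To dig into one particular codec, the same binary can also print per-codec help (an illustrative addition, not part of the original article):
ffmpeg -h decoder=h264
ffmpeg -h encoder=libx264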
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
To begin with, we (...)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (3880)
-
Concatenate Videos of Differing Resolution and Framerate
11 March 2019, by user5507535
I was going to use the following command
ffmpeg -i intro.mp4 -i cJfjHCA_ccc-2019-03-10T15\:30\:01-0400.inprogress.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]" -map "[outv]" -map "[outa]" work.mp4
to prepend an intro to a video file. I didn’t realize until I attempted production that I don’t actually know what the video file’s resolution or framerate will be.
It’s important that the intro play at the appropriate speed so the visuals and music appear as intended, but I want the video file’s footage to be untouched. How can I do this? Should I modify my existing command or start from a different approach?
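One possible direction (a hedged sketch, not an answer from the thread; the 1280x720 and 30 fps figures are placeholders) is to probe the main file first, then scale, pad and retime only the intro so both segments match before concat:
ffprobe -v error -select_streams v:0 -show_entries stream=width,height,r_frame_rate -of csv=p=0 cJfjHCA_ccc-2019-03-10T15\:30\:01-0400.inprogress.mp4
ffmpeg -i intro.mp4 -i cJfjHCA_ccc-2019-03-10T15\:30\:01-0400.inprogress.mp4 -filter_complex "[0:v:0]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30[v0];[1:v:0]setsar=1[v1];[v0][0:a:0][v1][1:a:0]concat=n=2:v=1:a=1[outv][outa]" -map "[outv]" -map "[outa]" work.mp4
Only the intro passes through scale, pad, setsar and fps, so the main footage reaches concat with its original resolution and timing; as with the original command, the output is still re-encoded, so it is not byte-for-byte untouched.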
-
FFmpeg change resolution h.264
9 January 2019, by PUBG Player Unknown Battlegrou
I need to add an option to my ffmpeg command to change the video resolution to 5760x2880 while keeping the right aspect ratio so the video isn’t distorted. My current command is
-i {INPUTFILE} -c:v libx264 -preset ultrafast -crf 0 -r:v 30 -c:a copy {OUTPUTFILE}
Thank you
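A hedged sketch of one way to do this (not a confirmed answer from the thread): let scale fit the picture inside 5760x2880 and pad the remainder, so the aspect ratio is preserved:
-i {INPUTFILE} -c:v libx264 -preset ultrafast -crf 0 -r:v 30 -vf "scale=5760:2880:force_original_aspect_ratio=decrease,pad=5760:2880:(ow-iw)/2:(oh-ih)/2" -c:a copy {OUTPUTFILE}
If black bars are not wanted, scale=5760:-2 fixes the width and derives an even height from the input’s aspect ratio instead of padding.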
-
Changing Resolution in ffmpeg output to display video stream in Android
23 January 2015, by vgokul129
I’m trying to change the camera output (resolution: 640 x 480) to 1024 x 720 and render the video frames on the Android screen. Is it possible to do this video conversion with the ffmpeg and SDL libraries? If so, are there any APIs available in the ffmpeg codec libraries to do the same?
Here is the code where I got the output for 640 x 480 resolution:
//Registering all the formats:
av_register_all();
AVFormatContext *pFormatCtx=NULL;
int i, videoStream;
AVCodecContext *pCodecCtx=NULL;
AVCodec *pCodec=NULL;
AVFrame *pFrame;
AVPacket packet;
int frameFinished;
SDL_Texture *bmp;
SDL_Renderer *renderer;
SDL_Window *screen;
if (SDL_Init(SDL_INIT_VIDEO)) {
LOGD( "Could not initialize SDL - %s\n", SDL_GetError());
SDL_Quit();
exit(1);
}
LOGD(" SDL Initialized..");
screen = SDL_CreateWindow("Window", SDL_WINDOWPOS_UNDEFINED,
SDL_WINDOWPOS_UNDEFINED, 0, 0,
SDL_WINDOW_SHOWN | SDL_WINDOW_FULLSCREEN);
LOGD("SDL Screen Created ..");
renderer = SDL_CreateRenderer(screen,-1,SDL_RENDERER_ACCELERATED | SDL_RENDERER_TARGETTEXTURE);
LOGD("Rendering Created...");
bmp = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_IYUV,SDL_TEXTUREACCESS_STREAMING,640,480);
LOGD("Texture created;");
SDL_RenderSetLogicalSize(renderer,640,480);
// Open video file
if(avformat_open_input(&pFormatCtx,"Filename", NULL,NULL)!=0)
LOGD("Cannot open the File");
pFormatCtx->interrupt_callback.callback = decode_interrupt_cb;
// Retrieve stream information
if(avformat_find_stream_info(pFormatCtx,NULL)<0)
LOGD("Cannot retrive Stream info");
// Couldn't find stream information
videoStream=-1;
for(i=0; i < pFormatCtx->nb_streams; i++)
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
videoStream=i;
break;
}
if(videoStream==-1)
LOGD("Cannot find Video Stream:");
pCodecCtx=pFormatCtx->streams[videoStream]->codec;
// Find the decoder for the video stream
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL) {
fprintf(stderr, "Unsupported codec!\n");
LOGD("Unable to find the decoder");
// Codec not found
}
// Open codec
if(avcodec_open2(pCodecCtx, pCodec,NULL)<0)
LOGD("Unable to OPEN Codec");
// Could not open codec
// Allocate video frame
pFrame=avcodec_alloc_frame();
i=0;
while(av_read_frame(pFormatCtx, &packet)>=0) {
// Is this a packet from the video stream?
if(packet.stream_index==videoStream) {
// Decode video frame
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,&packet);
// Did we get a video frame?
if(frameFinished) {
//----------------Code for Displaying
SDL_UpdateYUVTexture(bmp, NULL, pFrame->data[0],
pFrame->linesize[0], pFrame->data[1], pFrame->linesize[1],
pFrame->data[2], pFrame->linesize[2]);
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, bmp, NULL, NULL);
SDL_RenderPresent(renderer);
//-----------------
}
}
// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
}
// Free the YUV frame
av_free(pFrame);
// Close the codec
avcodec_close(pCodecCtx);
// Close the video file
avformat_close_input(&pFormatCtx);
return 0;
}
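A minimal sketch of one way to get a 1024 x 720 picture out of this code, assuming CPU rescaling with libswscale is acceptable; sws_ctx, scaled_data and scaled_linesize are illustrative names added here, not variables from the original post. The texture is created at the target size and each decoded frame is converted with sws_scale before SDL_UpdateYUVTexture:
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
// Create the texture and logical size at the target resolution instead of 640 x 480
bmp = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING, 1024, 720);
SDL_RenderSetLogicalSize(renderer, 1024, 720);
// One scaler context for the whole stream: source size/format -> 1024 x 720 YUV420P
struct SwsContext *sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
        1024, 720, AV_PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
uint8_t *scaled_data[4];
int scaled_linesize[4];
av_image_alloc(scaled_data, scaled_linesize, 1024, 720, AV_PIX_FMT_YUV420P, 1);
// Inside the decode loop, once frameFinished is set, rescale before uploading:
sws_scale(sws_ctx, (const uint8_t * const *)pFrame->data, pFrame->linesize,
        0, pCodecCtx->height, scaled_data, scaled_linesize);
SDL_UpdateYUVTexture(bmp, NULL,
        scaled_data[0], scaled_linesize[0],
        scaled_data[1], scaled_linesize[1],
        scaled_data[2], scaled_linesize[2]);
// After the loop, release the scaler resources
av_freep(&scaled_data[0]);
sws_freeContext(sws_ctx);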