
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (64)
-
Customising categories
21 June 2013, by
Category creation form
For those who know SPIP well, a category can be thought of as the equivalent of a rubrique (section).
For a document of type category, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration of form masks.
For a document of type media, the fields not displayed by default are: Short description
It is also in this configuration section that you can specify the (...) -
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out. -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses the HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a particular theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (11054)
-
Missing functions in new version of FFmpeg
25 April 2013, by Asik
I am upgrading a wrapper around FFmpeg to the most recent version of the library. Some functions have been renamed or replaced. For the most part I have been able to find equivalents easily, but the following two are giving me trouble:
av_abort_set_callback
av_abort_default_callback
I could not find these functions by searching online, so what happened to them, and what should they be replaced with now?
Here is a copy of the header file "abort.h" where they were located:
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVUTIL_ABORT_H
#define AVUTIL_ABORT_H
/**
* Causes abnormal program termination. By default, av_abort_callback calls
* abort() from the stdlib. This behavior can be altered by setting a
* different av_abort callback function.
*/
void av_abort_internal(void);
void av_abort_set_callback(void (*)(void));
void av_abort_default_callback(void);
#define av_abort() do { av_log(NULL, AV_LOG_ERROR, "Abort at %s:%d\n", __FILE__, __LINE__); av_abort_internal(); } while(0)
#endif /* AVUTIL_ABORT_H */
-
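As far as I can tell, the abort.h facility quoted above never shipped in a stable FFmpeg release and appears to have been removed shortly after it was introduced, with no direct replacement, so a wrapper has to provide its own fatal-error hook. The settable-callback pattern the header describes is straightforward to re-create in plain C. A hypothetical sketch (the my_* names are mine, not FFmpeg API):

```c
#include <stdio.h>
#include <stdlib.h>

/* Re-creation of the pattern abort.h documented: a default handler
 * that calls abort(), plus a setter to override it. */
static void my_abort_default_callback(void)
{
    abort();                          /* the documented default behaviour */
}

static void (*abort_cb)(void) = my_abort_default_callback;

static void my_abort_set_callback(void (*cb)(void))
{
    abort_cb = cb ? cb : my_abort_default_callback;
}

static void my_abort_internal(void)
{
    abort_cb();
}

/* mirrors the av_abort() macro, with fprintf standing in for av_log() */
#define my_abort() do { \
    fprintf(stderr, "Abort at %s:%d\n", __FILE__, __LINE__); \
    my_abort_internal(); \
} while (0)
```

A wrapper can then install a callback that transfers control back to managed code (for example via longjmp) instead of letting the process die in abort().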
ffmpeg record screen and save video file to disk as .mpg
7 January 2015, by musimbate
I want to record the screen of my PC (using gdigrab on my Windows machine) and store the saved video file on my disk as an mp4 or mpg file. I have found an example piece of code that grabs the screen and shows it in an SDL window here: http://xwk.iteye.com/blog/2125720 (the code is at the bottom of the page and has an English version), and the FFmpeg muxing example https://ffmpeg.org/doxygen/trunk/muxing_8c-source.html seems able to help encode audio and video into a desired output video file.
I have tried to combine these two by having one format context for grabbing the screen (AVFormatContext *pFormatCtx; in my code) and a separate format context to write the desired video file (AVFormatContext *outFormatContextEncoded;). Within the loop that reads packets from the input (screen-grab) stream, I write the packets directly to the output file, as shown in my code. I have kept the SDL code so I can see what I am recording. Below is my code, with my modified write_video_frame() function.
The code builds OK, but the output video can't be played by VLC. When I run the command
ffmpeg -i filename.mpg
I get this output
[mpeg @ 003fed20] probed stream 0 failed
[mpeg @ 003fed20] Stream #0: not enough frames to estimate rate; consider increasing probesize
[mpeg @ 003fed20] Could not find codec parameters for stream 0 (Video: none): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
karamage.mpg: could not find codec parameters
Input #0, mpeg, from 'karamage.mpg':
Duration: 19:30:09.25, start: 37545.438756, bitrate: 2 kb/s
Stream #0:0[0x1e0]: Video: none, 90k tbr, 90k tbn
At least one output file must be specified
Am I doing something wrong here? I am new to FFmpeg, and any guidance on this is highly appreciated. Thank you for your time.
int main(int argc, char* argv[])
{
AVFormatContext *pFormatCtx;
int i, videoindex;
AVCodecContext *pCodecCtx;
AVCodec *pCodec;
av_register_all();
avformat_network_init();
//Localy defined structure.
OutputStream outVideoStream = { 0 };
const char *filename;
AVOutputFormat *outFormatEncoded;
AVFormatContext *outFormatContextEncoded;
AVCodec *videoCodec;
filename="karamage.mpg";
int ret1;
int have_video = 0, have_audio = 0;
int encode_video = 0, encode_audio = 0;
AVDictionary *opt = NULL;
//ASSIGN STH TO THE FORMAT CONTEXT.
pFormatCtx = avformat_alloc_context();
//
//Use this when opening a local file.
//char filepath[]="src01_480x272_22.h265";
//avformat_open_input(&pFormatCtx,filepath,NULL,NULL)
//Register Device
avdevice_register_all();
//Use gdigrab
AVDictionary* options = NULL;
//Set some options
//grabbing frame rate
//av_dict_set(&options,"framerate","5",0);
//The distance from the left edge of the screen or desktop
//av_dict_set(&options,"offset_x","20",0);
//The distance from the top edge of the screen or desktop
//av_dict_set(&options,"offset_y","40",0);
//Video frame size. The default is to capture the full screen
//av_dict_set(&options,"video_size","640x480",0);
AVInputFormat *ifmt=av_find_input_format("gdigrab");
if(avformat_open_input(&pFormatCtx,"desktop",ifmt,&options)!=0){
printf("Couldn't open input stream.\n");
return -1;
}
if(avformat_find_stream_info(pFormatCtx,NULL)<0)
{
printf("Couldn't find stream information.\n");
return -1;
}
videoindex=-1;
for(i=0; i < pFormatCtx->nb_streams; i++)
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO)
{
videoindex=i;
break;
}
if(videoindex==-1)
{
printf("Didn't find a video stream.\n");
return -1;
}
pCodecCtx=pFormatCtx->streams[videoindex]->codec;
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL)
{
printf("Codec not found.\n");
return -1;
}
if(avcodec_open2(pCodecCtx, pCodec,NULL)<0)
{
printf("Could not open codec.\n");
return -1;
}
AVFrame *pFrame,*pFrameYUV;
pFrame=avcodec_alloc_frame();
pFrameYUV=avcodec_alloc_frame();
//PIX_FMT_YUV420P WHAT DOES THIS SAY ABOUT THE FORMAT??
uint8_t *out_buffer=(uint8_t *)av_malloc(avpicture_get_size(PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height));
avpicture_fill((AVPicture *)pFrameYUV, out_buffer, PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
//<<<<<<<<<<<-------PREP WORK TO WRITE ENCODED VIDEO FILES-----
avformat_alloc_output_context2(&outFormatContextEncoded, NULL, NULL, filename);
if (!outFormatContextEncoded) {
printf("Could not deduce output format from file extension: using MPEG.\n");
avformat_alloc_output_context2(&outFormatContextEncoded, NULL, "mpeg", filename);
}
if (!outFormatContextEncoded)
return 1;
outFormatEncoded=outFormatContextEncoded->oformat;
//THIS CREATES THE STREAMS(AUDIO AND VIDEO) ADDED TO OUR OUTPUT STREAM
if (outFormatEncoded->video_codec != AV_CODEC_ID_NONE) {
//YOUR VIDEO AND AUDIO PROPS ARE SET HERE.
add_stream(&outVideoStream, outFormatContextEncoded, &videoCodec, outFormatEncoded->video_codec);
have_video = 1;
encode_video = 1;
}
// Now that all the parameters are set, we can open the audio and
// video codecs and allocate the necessary encode buffers.
if (have_video)
open_video(outFormatContextEncoded, videoCodec, &outVideoStream, opt);
av_dump_format(outFormatContextEncoded, 0, filename, 1);
/* open the output file, if needed */
if (!(outFormatEncoded->flags & AVFMT_NOFILE)) {
ret1 = avio_open(&outFormatContextEncoded->pb, filename, AVIO_FLAG_WRITE);
if (ret1 < 0) {
//fprintf(stderr, "Could not open '%s': %s\n", filename,
// av_err2str(ret));
fprintf(stderr, "Could not open '%s'.\n", filename);
return 1;
}
}
/* Write the stream header, if any. */
ret1 = avformat_write_header(outFormatContextEncoded, &opt);
if (ret1 < 0) {
//fprintf(stderr, "Error occurred when opening output file: %s\n",
// av_err2str(ret));
fprintf(stderr, "Error occurred when opening output file\n");
return 1;
}
//<<<<<<<<<<<-------PREP WORK TO WRITE ENCODED VIDEO FILES-----
//SDL----------------------------
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
printf( "Could not initialize SDL - %s\n", SDL_GetError());
return -1;
}
int screen_w=640,screen_h=360;
const SDL_VideoInfo *vi = SDL_GetVideoInfo();
//Half of the Desktop's width and height.
screen_w = vi->current_w/2;
screen_h = vi->current_h/2;
SDL_Surface *screen;
screen = SDL_SetVideoMode(screen_w, screen_h, 0,0);
if(!screen) {
printf("SDL: could not set video mode - exiting:%s\n",SDL_GetError());
return -1;
}
SDL_Overlay *bmp;
bmp = SDL_CreateYUVOverlay(pCodecCtx->width, pCodecCtx->height,SDL_YV12_OVERLAY, screen);
SDL_Rect rect;
//SDL End------------------------
int ret, got_picture;
AVPacket *packet=(AVPacket *)av_malloc(sizeof(AVPacket));
//TRY TO INIT THE PACKET HERE
av_init_packet(packet);
//Output Information-----------------------------
printf("File Information---------------------\n");
av_dump_format(pFormatCtx,0,NULL,0);
printf("-------------------------------------------------\n");
struct SwsContext *img_convert_ctx;
img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
//------------------------------
//
while(av_read_frame(pFormatCtx, packet)>=0)
{
if(packet->stream_index==videoindex)
{
//HERE WE DECODE THE PACKET INTO THE FRAME
ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
if(ret < 0)
{
printf("Decode Error.\n");
return -1;
}
if(got_picture)
{
//THIS IS WHERE WE DO STH WITH THE FRAME WE JUST GOT FROM THE STREAM
//FREE AREA--START
//IN HERE YOU CAN WORK WITH THE FRAME OF THE PACKET.
write_video_frame(outFormatContextEncoded, &outVideoStream,packet);
//FREE AREA--END
sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);
SDL_LockYUVOverlay(bmp);
bmp->pixels[0]=pFrameYUV->data[0];
bmp->pixels[2]=pFrameYUV->data[1];
bmp->pixels[1]=pFrameYUV->data[2];
bmp->pitches[0]=pFrameYUV->linesize[0];
bmp->pitches[2]=pFrameYUV->linesize[1];
bmp->pitches[1]=pFrameYUV->linesize[2];
SDL_UnlockYUVOverlay(bmp);
rect.x = 0;
rect.y = 0;
rect.w = screen_w;
rect.h = screen_h;
SDL_DisplayYUVOverlay(bmp, &rect);
//Delay 40ms----WHY THIS DELAY????
SDL_Delay(40);
}
}
av_free_packet(packet);
}//THE LOOP TO PULL PACKETS FROM THE FORMAT CONTEXT ENDS HERE.
//AFTER THE WHILE LOOP WE DO SOME CLEANING
//av_read_pause(context);
av_write_trailer(outFormatContextEncoded);
close_stream(outFormatContextEncoded, &outVideoStream);
if (!(outFormatContextEncoded->flags & AVFMT_NOFILE))
/* Close the output file. */
avio_close(outFormatContextEncoded->pb);
/* free the stream */
avformat_free_context(outFormatContextEncoded);
//STOP DOING YOUR CLEANING
sws_freeContext(img_convert_ctx);
SDL_Quit();
av_free(out_buffer);
av_free(pFrameYUV);
avcodec_close(pCodecCtx);
avformat_close_input(&pFormatCtx);
return 0;
}
/*
* encode one video frame and send it to the muxer
* return 1 when encoding is finished, 0 otherwise
*/
static int write_video_frame(AVFormatContext *oc, OutputStream *ost,AVPacket * pkt11)
{
int ret;
AVCodecContext *c;
AVFrame *frame = NULL; /* never assigned here; the !frame check below skips the raw-picture path */
int got_packet = 0;
c = ost->st->codec;
//DO NOT NEED THIS FRAME.
//frame = get_video_frame(ost);
if (oc->oformat->flags & AVFMT_RAWPICTURE) {
//IGNORE THIS FOR A MOMENT
/* a hack to avoid data copy with some raw video muxers */
AVPacket pkt;
av_init_packet(&pkt);
if (!frame)
return 1;
pkt.flags |= AV_PKT_FLAG_KEY;
pkt.stream_index = ost->st->index;
pkt.data = (uint8_t *)frame;
pkt.size = sizeof(AVPicture);
pkt.pts = pkt.dts = frame->pts;
av_packet_rescale_ts(&pkt, c->time_base, ost->st->time_base);
ret = av_interleaved_write_frame(oc, &pkt);
} else {
ret = write_frame(oc, &c->time_base, ost->st, pkt11);
}
if (ret < 0) {
fprintf(stderr, "Error while writing video frame\n");
exit(1);
}
return 1;
}
-
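A side note on the raw-picture branch in the code above: av_packet_rescale_ts() converts the packet's pts/dts from the codec time base to the stream time base. The arithmetic it performs can be sketched in plain C (a simplified stand-in for the real av_rescale_q(): truncating division, no overflow guard, no AV_NOPTS_VALUE passthrough):

```c
#include <stdint.h>

/* Rescale a timestamp from a source time base src_num/src_den to a
 * destination time base dst_num/dst_den. Sketch only: truncating
 * division, no overflow protection. */
static int64_t rescale_ts(int64_t ts,
                          int src_num, int src_den,
                          int dst_num, int dst_den)
{
    /* ts in seconds is ts * src_num / src_den; expressed in the
     * destination base that is ts * src_num * dst_den / (src_den * dst_num). */
    return ts * (int64_t)src_num * dst_den / ((int64_t)src_den * dst_num);
}
```

With a 1/25 codec time base and the MPEG-PS 1/90000 stream time base, for example, frame 5 maps to rescale_ts(5, 1, 25, 1, 90000) == 18000 ticks.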
Merge commit ’e8c4db0d4d07738fed716b1d2f20c85aac944641’
30 April 2015, by Michael Niedermayer