
Other articles (79)
-
User profiles
12 April 2011 — Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...) -
Configuring language support
15 November 2010 — Accessing the configuration and adding supported languages
To configure support for new languages, go to the "Administrer" (administration) section of the site.
From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language; once one has, the language is greyed out in the configuration and (...) -
Publishing on MédiaSpip
13 June 2013 — Can I post content from an iPad tablet?
Yes, as long as your MédiaSpip installation is at version 0.2 or later. If in doubt, contact the administrator of your MédiaSpip to find out.
On other sites (16585)
-
ffmpeg record screen and save video file to disk as .mp4 or .mpg
5 January 2015, by musimbate — I want to record the screen of my PC (using gdigrab on my Windows machine) and store the saved video on disk as an .mp4 or .mpg file. I found an example that grabs the screen and shows it in an SDL window here: http://xwk.iteye.com/blog/2125720 (the code is at the bottom of the page and has an English version), and the FFmpeg muxing example (https://ffmpeg.org/doxygen/trunk/muxing_8c-source.html) shows how to encode audio and video into a desired output file.
I have tried to combine the two by using one format context for grabbing the screen (AVFormatContext *pFormatCtx; in my code) and a separate format context for writing the desired video file (AVFormatContext *outFormatContextEncoded;). Within the loop that reads packets from the input (screen-grab) stream, I write the packets directly to the output file, as shown in my code. I have kept the SDL code so I can see what I am recording. Below is my code, with my modified write_video_frame() function.
The code builds OK, but the output video can't be played by VLC. When I run the command
ffmpeg -i filename.mpg
I get this output:
[mpeg @ 003fed20] probed stream 0 failed
[mpeg @ 003fed20] Stream #0: not enough frames to estimate rate; consider increasing probesize
[mpeg @ 003fed20] Could not find codec parameters for stream 0 (Video: none): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
karamage.mpg: could not find codec parameters
Input #0, mpeg, from 'karamage.mpg':
Duration: 19:30:09.25, start: 37545.438756, bitrate: 2 kb/s
Stream #0:0[0x1e0]: Video: none, 90k tbr, 90k tbn
At least one output file must be specified
Am I doing something wrong here? I am new to FFmpeg, and any guidance on this is highly appreciated. Thank you for your time.
//Assumed includes for a standalone build (not shown in the original snippet).
//OutputStream, add_stream(), open_video(), write_frame() and close_stream()
//come from the FFmpeg muxing example linked above.
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavdevice/avdevice.h>
#include <libswscale/swscale.h>
#include <SDL/SDL.h>
int main(int argc, char* argv[])
{
AVFormatContext *pFormatCtx;
int i, videoindex;
AVCodecContext *pCodecCtx;
AVCodec *pCodec;
av_register_all();
avformat_network_init();
//Locally defined structure; OutputStream comes from the muxing example.
OutputStream outVideoStream = { 0 };
const char *filename;
AVOutputFormat *outFormatEncoded;
AVFormatContext *outFormatContextEncoded;
AVCodec *videoCodec;
filename="karamage.mpg";
int ret1;
int have_video = 0, have_audio = 0;
int encode_video = 0, encode_audio = 0;
AVDictionary *opt = NULL;
//ASSIGN STH TO THE FORMAT CONTEXT.
pFormatCtx = avformat_alloc_context();
//
//Use this when opening a local file.
//char filepath[]="src01_480x272_22.h265";
//avformat_open_input(&pFormatCtx,filepath,NULL,NULL)
//Register Device
avdevice_register_all();
//Use gdigrab
AVDictionary* options = NULL;
//Set some options
//grabbing frame rate
//av_dict_set(&options,"framerate","5",0);
//The distance from the left edge of the screen or desktop
//av_dict_set(&options,"offset_x","20",0);
//The distance from the top edge of the screen or desktop
//av_dict_set(&options,"offset_y","40",0);
//Video frame size. The default is to capture the full screen
//av_dict_set(&options,"video_size","640x480",0);
AVInputFormat *ifmt=av_find_input_format("gdigrab");
if(avformat_open_input(&pFormatCtx,"desktop",ifmt,&options)!=0){
printf("Couldn't open input stream.\n");
return -1;
}
if(avformat_find_stream_info(pFormatCtx,NULL)<0)
{
printf("Couldn't find stream information.\n");
return -1;
}
videoindex=-1;
for(i=0; i < pFormatCtx->nb_streams; i++)
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO)
{
videoindex=i;
break;
}
if(videoindex==-1)
{
printf("Didn't find a video stream.\n");
return -1;
}
pCodecCtx=pFormatCtx->streams[videoindex]->codec;
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL)
{
printf("Codec not found.\n");
return -1;
}
if(avcodec_open2(pCodecCtx, pCodec,NULL)<0)
{
printf("Could not open codec.\n");
return -1;
}
AVFrame *pFrame,*pFrameYUV;
pFrame=avcodec_alloc_frame();
pFrameYUV=avcodec_alloc_frame();
//PIX_FMT_YUV420P is planar YUV 4:2:0, matching the SDL YV12 overlay used below.
uint8_t *out_buffer=(uint8_t *)av_malloc(avpicture_get_size(PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height));
avpicture_fill((AVPicture *)pFrameYUV, out_buffer, PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
//<<<<<<<<<<<-------PREP WORK TO WRITE ENCODED VIDEO FILES-----
avformat_alloc_output_context2(&outFormatContextEncoded, NULL, NULL, filename);
if (!outFormatContextEncoded) {
printf("Could not deduce output format from file extension: using MPEG.\n");
avformat_alloc_output_context2(&outFormatContextEncoded, NULL, "mpeg", filename);
}
if (!outFormatContextEncoded)
return 1;
outFormatEncoded=outFormatContextEncoded->oformat;
//THIS CREATES THE STREAMS(AUDIO AND VIDEO) ADDED TO OUR OUTPUT STREAM
if (outFormatEncoded->video_codec != AV_CODEC_ID_NONE) {
//YOUR VIDEO AND AUDIO PROPS ARE SET HERE.
add_stream(&outVideoStream, outFormatContextEncoded, &videoCodec, outFormatEncoded->video_codec);
have_video = 1;
encode_video = 1;
}
// Now that all the parameters are set, we can open the audio and
// video codecs and allocate the necessary encode buffers.
if (have_video)
open_video(outFormatContextEncoded, videoCodec, &outVideoStream, opt);
av_dump_format(outFormatContextEncoded, 0, filename, 1);
/* open the output file, if needed */
if (!(outFormatEncoded->flags & AVFMT_NOFILE)) {
ret1 = avio_open(&outFormatContextEncoded->pb, filename, AVIO_FLAG_WRITE);
if (ret1 < 0) {
//fprintf(stderr, "Could not open '%s': %s\n", filename,
// av_err2str(ret1));
fprintf(stderr, "Could not open output file.\n");
return 1;
}
}
/* Write the stream header, if any. */
ret1 = avformat_write_header(outFormatContextEncoded, &opt);
if (ret1 < 0) {
//fprintf(stderr, "Error occurred when opening output file: %s\n",
// av_err2str(ret));
fprintf(stderr, "Error occurred when opening output file\n");
return 1;
}
//<<<<<<<<<<<-------PREP WORK TO WRITE ENCODED VIDEO FILES-----
//SDL----------------------------
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
printf( "Could not initialize SDL - %s\n", SDL_GetError());
return -1;
}
int screen_w=640,screen_h=360;
const SDL_VideoInfo *vi = SDL_GetVideoInfo();
//Half of the Desktop's width and height.
screen_w = vi->current_w/2;
screen_h = vi->current_h/2;
SDL_Surface *screen;
screen = SDL_SetVideoMode(screen_w, screen_h, 0,0);
if(!screen) {
printf("SDL: could not set video mode - exiting:%s\n",SDL_GetError());
return -1;
}
SDL_Overlay *bmp;
bmp = SDL_CreateYUVOverlay(pCodecCtx->width, pCodecCtx->height,SDL_YV12_OVERLAY, screen);
SDL_Rect rect;
//SDL End------------------------
int ret, got_picture;
AVPacket *packet=(AVPacket *)av_malloc(sizeof(AVPacket));
//TRY TO INIT THE PACKET HERE
av_init_packet(packet);
//Output Information-----------------------------
printf("File Information---------------------\n");
av_dump_format(pFormatCtx,0,NULL,0);
printf("-------------------------------------------------\n");
struct SwsContext *img_convert_ctx;
img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
//------------------------------
//
while(av_read_frame(pFormatCtx, packet)>=0)
{
if(packet->stream_index==videoindex)
{
//HERE WE DECODE THE PACKET INTO THE FRAME
ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
if(ret < 0)
{
printf("Decode Error.\n");
return -1;
}
if(got_picture)
{
//THIS IS WHERE WE DO STH WITH THE FRAME WE JUST GOT FROM THE STREAM
//FREE AREA--START
//IN HERE YOU CAN WORK WITH THE FRAME OF THE PACKET.
//NOTE: this hands the raw *input* packet to the muxer; nothing is re-encoded here.
write_video_frame(outFormatContextEncoded, &outVideoStream, packet);
//FREE AREA--END
sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);
SDL_LockYUVOverlay(bmp);
bmp->pixels[0]=pFrameYUV->data[0];
bmp->pixels[2]=pFrameYUV->data[1];
bmp->pixels[1]=pFrameYUV->data[2];
bmp->pitches[0]=pFrameYUV->linesize[0];
bmp->pitches[2]=pFrameYUV->linesize[1];
bmp->pitches[1]=pFrameYUV->linesize[2];
SDL_UnlockYUVOverlay(bmp);
rect.x = 0;
rect.y = 0;
rect.w = screen_w;
rect.h = screen_h;
SDL_DisplayYUVOverlay(bmp, &rect);
//Delay 40 ms: this paces the SDL display loop at roughly 25 fps.
SDL_Delay(40);
}
}
av_free_packet(packet);
}//THE LOOP TO PULL PACKETS FROM THE FORMAT CONTEXT ENDS HERE.
//AFTER THE WHILE LOOP WE DO SOME CLEANING
//av_read_pause(context);
av_write_trailer(outFormatContextEncoded);
close_stream(outFormatContextEncoded, &outVideoStream);
if (!(outFormatContextEncoded->flags & AVFMT_NOFILE))
/* Close the output file. */
avio_close(outFormatContextEncoded->pb);
/* free the stream */
avformat_free_context(outFormatContextEncoded);
//STOP DOING YOUR CLEANING
sws_freeContext(img_convert_ctx);
SDL_Quit();
av_free(out_buffer);
av_free(pFrameYUV);
avcodec_close(pCodecCtx);
avformat_close_input(&pFormatCtx);
return 0;
}
/*
* encode one video frame and send it to the muxer
* return 1 when encoding is finished, 0 otherwise
*/
static int write_video_frame(AVFormatContext *oc, OutputStream *ost,AVPacket * pkt11)
{
int ret;
AVCodecContext *c;
AVFrame *frame = NULL; //get_video_frame() is commented out below, so this stays NULL
int got_packet = 0;
c = ost->st->codec;
//DO NOT NEED THIS FRAME.
//frame = get_video_frame(ost);
if (oc->oformat->flags & AVFMT_RAWPICTURE) {
//IGNORE THIS FOR A MOMENT
/* a hack to avoid data copy with some raw video muxers */
AVPacket pkt;
av_init_packet(&pkt);
if (!frame)
return 1;
pkt.flags |= AV_PKT_FLAG_KEY;
pkt.stream_index = ost->st->index;
pkt.data = (uint8_t *)frame;
pkt.size = sizeof(AVPicture);
pkt.pts = pkt.dts = frame->pts;
av_packet_rescale_ts(&pkt, c->time_base, ost->st->time_base);
ret = av_interleaved_write_frame(oc, &pkt);
} else {
ret = write_frame(oc, &c->time_base, ost->st, pkt11);
}
if (ret < 0) {
fprintf(stderr, "Error while writing video frame: %s\n");
exit(1);
}
return 1;
}
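The probe output above points at the likely root cause: the gdigrab packets (raw BMP frames) are written into the MPEG muxer as-is, so the output stream never contains encoded video and ffmpeg cannot identify a codec. Below is a minimal sketch of the missing encode step, assuming the OutputStream helpers (with their next_pts field) from the muxing example and a pFrameYUV whose width, height and format fields match the encoder; the function name encode_and_write_frame is hypothetical:
static int encode_and_write_frame(AVFormatContext *oc, OutputStream *ost, AVFrame *yuv_frame)
{
    AVCodecContext *c = ost->st->codec;
    AVPacket pkt = { 0 };          /* data/size zeroed so the encoder allocates the buffer */
    int got_packet = 0;
    int ret;
    av_init_packet(&pkt);
    yuv_frame->pts = ost->next_pts++;   /* the encoder needs a monotonically increasing pts */
    ret = avcodec_encode_video2(c, &pkt, yuv_frame, &got_packet);
    if (ret < 0)
        return ret;
    if (got_packet) {
        av_packet_rescale_ts(&pkt, c->time_base, ost->st->time_base);
        pkt.stream_index = ost->st->index;
        ret = av_interleaved_write_frame(oc, &pkt);  /* also unrefs the packet */
    }
    return ret;
}
In the read loop, this would be called with pFrameYUV after sws_scale(), in place of the current write_video_frame(outFormatContextEncoded, &outVideoStream, packet) call.
-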
Python Matplotlib Basemap animation with FFMpegwriter stops after 820 Frames ?
5 August 2013, by cpaulik — If I run the following code, it just stops after 820 frames. I tested this on both an Ubuntu 12.04 VM and on Linux Mint 15. Unfortunately there is no error message; the program just hangs after printing 2012-06-02T16:54:00.
import os, sys
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
import matplotlib.animation as animation
from datetime import datetime, timedelta

def animation_test(start, end, fps=10, save_path='/home/username/animation_test/',
                   save_name="test.mp4", dpi=80):
    step = timedelta(minutes=3)
    current = start
    dates = []
    frame = 0
    while current <= end:
        dates.append(current)
        current += step
    fig = plt.figure(figsize=(16, 9), facecolor='k', edgecolor='k')
    ax = fig.add_subplot(111)
    metadata = dict(title='Movie Test', artist='Matplotlib',
                    comment='Movie support!')
    writer = animation.FFMpegWriter(fps=fps, metadata=metadata, bitrate=20000)
    direction = -0.5
    lat_current = 0
    lon_current = 0
    with writer.saving(fig, os.path.join(save_path, save_name), dpi):
        for current in dates:
            ax.cla()
            if direction > 0 and lat_current > 40 or \
               direction < 0 and lat_current < -40:
                direction = -direction
            lat_current = lat_current + direction
            lon_current = lon_current - 0.75
            if lon_current < -180:
                lon_current += 360
            basem = Basemap(projection='ortho', lat_0=lat_current, lon_0=lon_current,
                            resolution='l', ax=ax)
            basem.drawcoastlines()
            #plt.show()
            plt.savefig(os.path.join(save_path, 'frame%d.png' % frame),
                        dpi=dpi, facecolor='w', edgecolor='k')
            writer.grab_frame()
            frame += 1
            print current.isoformat()

start = datetime.now()
animation_test(datetime(2012, 6, 1, 0, 0, 0), datetime(2012, 6, 4, 0, 0, 0), fps=10, dpi=80)
print datetime.now() - start

To explain the code a little bit:
I want to make an animation of satellite data, which comes in small 3-minute files, and show it on a rotating globe. That is why the loop in the example code steps through the animation in 3-minute increments. I removed the reading and plotting of the satellite data so that anyone can run the code.
When I removed Basemap from the program and just plotted a scatterplot of random data, the program ran all the way through.
I'm not sure, but I don't think it is a memory issue, since only about 20% of my RAM is in use while the program is running.
Thanks for any help in getting to the bottom of this.
-
Tools/Techniques for investigating video corruption — ffmpeg / libavcodec
17 July 2013, by Gopherkhan — In my current work I'm trying to encode some images to H.264 video using FFmpeg's C library. The resulting video plays fine in VLC but has no preview image. The video plays in VLC and MPlayer on Ubuntu, but won't play on Mac or PC (in fact, it causes a "VTDecoderXPCService quit unexpectedly" error on Mac).
If I run the resulting file through FFmpeg on the command line, the output file has a preview image and plays correctly everywhere.
Apparently the file I get out of the program is corrupt somewhere, but there is no output during compilation or at runtime to indicate where. I can't share my code at the moment (the work code isn't open source yet :-( ), but I have tried a number of things:
- Writing only header and trailer data (av_write_trailer) and no frames
- Writing frames only, without the trailer (using avcodec_encode_video2 and av_write_frame)
- Adjusting our time_base and frame pts values to encode only one frame per second
- Removing all variable frame rate code
- Numerous other variants that I won't bother you with here
In creating my project, I've also followed the following tutorials:
And consulted the deprecated FFmpeg functions list
And compiled FFmpeg on Ubuntu according to the official documentation
And consulted numerous StackOverflow questions:
- Raw H264 frames in mpegts container using libavcodec
- How to encode Bitmaps into a video using MediaCodec?
- How to convert RGB from YUV420p for ffmpeg encoder?
- Encoding H.264 video using FFmpeg C API
- ffmpeg: how to save h264 raw data as mp4 file
But every run of the program runs into the exact same problem.
My question is: is there anything obvious that causes a programmatic run of FFmpeg to differ from a console run (e.g., an incomplete finalization, a threading issue, etc.)? Is there some obvious reason a console run could repair a corrupted file? Or is there a decent tool or method for inspecting a video file and finding the point of corruption?
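For what it's worth, ffprobe (which ships with FFmpeg) is a common starting point for this kind of inspection; assuming a suspect file named out.mp4, something like:
ffprobe -v error -show_format -show_streams out.mp4
ffmpeg -v error -i out.mp4 -f null -
The first command prints the container and stream parameters (or errors if the headers are unreadable); the second decodes the whole file and prints only errors, which often narrows down where the corruption begins. Comparing ffprobe -show_packets output between the broken file and the repaired one can also reveal missing headers or timestamps.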