
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
Other articles (40)
-
The farm's regular Cron tasks
1 December 2010
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance in the mutualised farm on a regular basis. Combined with a system Cron on the farm's central site, it generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
-
Authorizations overridden by plugins
27 April 2010
Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If necessary, contact your MédiaSpip administrator to find out.
On other sites (5022)
-
Grab partial .mp4 video from online source
28 July 2016, by Flemingjp
I want to be able to grab a partial mp4 file from an online source so I can produce quick thumbnails.
My idea was to pull a partial mp4 file from an HTTP request using the header
Range: 'bytes=1000-'
However, when I parse the data it doesn't have the mp4 container and is therefore corrupt. Firefox handles this easily, as seeking to an unloaded part of the video is very quick.
How can I pull partial mp4 clips? For example, from here I could get a clip that runs from 0:28 to 0:30.
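A possible workaround, offered here as a sketch rather than anything from the original post: the mp4 moov index often sits at the end of the file, so a head-only byte range gives a parser nothing to work with. Opening the URL directly with libavformat sidesteps the problem, because FFmpeg's HTTP reader issues its own byte-range requests when seeking, provided the server supports them. The URL and the 28-second timestamp below are placeholders, and the snippet assumes a reasonably recent FFmpeg (3.1 or newer for the send/receive decoding API).
// thumb_probe.c -- sketch: decode one frame near 0:28 of a remote mp4 for a thumbnail.
// Assumed build line: gcc thumb_probe.c -o thumb_probe -lavformat -lavcodec -lavutil
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    const char *url = "http://example.com/video.mp4";  // placeholder URL
    avformat_network_init();                           // enable the http protocol

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, url, NULL, NULL) < 0 ||
        avformat_find_stream_info(fmt, NULL) < 0)
        return 1;

    int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (vidx < 0)
        return 1;

    const AVCodec *dec = avcodec_find_decoder(fmt->streams[vidx]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[vidx]->codecpar);
    if (avcodec_open2(ctx, dec, NULL) < 0)
        return 1;

    // Seek close to 0:28; the demuxer performs the byte-range requests it needs.
    av_seek_frame(fmt, -1, 28LL * AV_TIME_BASE, AVSEEK_FLAG_BACKWARD);

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frm = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        int got = pkt->stream_index == vidx &&
                  avcodec_send_packet(ctx, pkt) >= 0 &&
                  avcodec_receive_frame(ctx, frm) >= 0;
        av_packet_unref(pkt);
        if (got) {
            printf("decoded %dx%d frame near 0:28\n", frm->width, frm->height);
            break;  // hand frm to a scaler/JPEG encoder to produce the thumbnail
        }
    }

    av_frame_free(&frm);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}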
-
ffmpeg Could not find input stream [closed]
17 April 2013, by Peter Walker
I've seen lots of posts on streaming video feeds to ffserver but still haven't found what I wanted.
I'm not only using mp4 files, but avi and other formats as well.
ffmpeg -i test5.mp4 http://localhost:8090/feed1.ffm
I keep getting this error:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test5.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2mp41
encoder : Lavf53.21.1
Duration: 00:05:00.00, start: 0.000000, bitrate: 278 kb/s
Stream #0.0(und): Video: mpeg4 (Simple Profile), yuv420p, 640x320 [PAR 1:1 DAR 2:1], 277 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
Incompatible sample format '(null)' for codec 'mp2', auto-selecting format 's16'
Last message repeated 1 times
Output #0, ffm, to 'http://localhost:8090/feed1.ffm':
Stream #0.0: Video: [0][0][0][0] / 0x0000, yuv420p, q=0-0, 1000k tbn
Stream #0.1: Video: mpeg1video, (null), 32622x226393720, q=2-31, pass 1, pass 2, 226393 kb/s, 1000k tbn
Stream #0.2: Audio: mp2, 22050 Hz, 1 channels, s16, 226391 kb/s
Stream #0.3: Video: msmpeg4, yuv420p, 352x240, q=2-31, 226391 kb/s, 1000k tbn, 15 tbc
Could not find input stream matching output stream #0.2
*** glibc detected *** ffmpeg: free(): invalid pointer: 0x0000000000a53d00 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x7e626)[0x7f6e0d4af626]
/usr/lib/x86_64-linux-gnu/libavformat.so.53(avformat_free_context+0xb0) [0x7f6e0f2c9c70]
ffmpeg[0x4089bf]
ffmpeg[0x40a52f]
ffmpeg[0x407a04]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f6e0d45276d]
The config file I used for ffserver is:
Port 8090
RTSPPort 7654
BindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 500000
CustomLog -
NoDaemon
<feed>
File /tmp/feed1.ffm
FileMaxSize 5M
# NoAudio
</feed>
<stream>
Feed feed1.ffm
Format rtp
# NoAudio
VideoFrameRate 30
</stream>
Can anyone help me figure out what I'm missing or what to look for?
I tried googling, but almost everyone else uses video feeds instead, or it simply works for them.
Any clue, direction, or suggestion would help. Thank you so much in advance.
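A guess at the cause, not taken from the original thread: ffserver's defaults for this stream declare an mp2 audio track (output stream #0.2 in the log above), while test5.mp4 contains only video, so ffmpeg has no input stream to map to it, hence "Could not find input stream matching output stream #0.2". One way to test that hypothesis is to uncomment the NoAudio line that is already present in the <stream> section:
<stream>
Feed feed1.ffm
Format rtp
NoAudio
VideoFrameRate 30
</stream>
Alternatively, feeding a source file that actually carries an audio track should also satisfy the audio output stream.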
-
Encoding a screenshot into a video using FFMPEG
2 July 2013, by mohM
I'm trying to get the pixels from the screen and encode the screenshot into a video using ffmpeg. I've seen a couple of examples, but they either assume you already have the pixel data or use image file input. It seems like whether I use sws_scale() or not (which is included in the examples I've seen), or whether I'm typecasting an HBITMAP or a RGBQUAD*, it tells me that the image src data is bad and it encodes a blank image rather than the screenshot. Is there something I'm missing here?
AVCodec* codec;
AVCodecContext* c = NULL;
AVFrame* inpic;
uint8_t* outbuf, *picture_buf;
int i, out_size, size, outbuf_size;
HBITMAP hBmp;
//int x,y;
avcodec_register_all();
printf("Video encoding\n");
// Find the mpeg1 video encoder
codec = avcodec_find_encoder(CODEC_ID_H264);
if (!codec) {
fprintf(stderr, "Codec not found\n");
exit(1);
}
else printf("H264 codec found\n");
c = avcodec_alloc_context3(codec);
inpic = avcodec_alloc_frame();
c->bit_rate = 400000;
c->width = screenWidth; // resolution must be a multiple of two
c->height = screenHeight;
c->time_base.num = 1;
c->time_base.den = 25;
c->gop_size = 10; // emit one intra frame every ten frames
c->max_b_frames=1;
c->pix_fmt = PIX_FMT_YUV420P;
c->codec_id = CODEC_ID_H264;
//c->codec_type = AVMEDIA_TYPE_VIDEO;
//av_opt_set(c->priv_data, "preset", "slow", 0);
//printf("Setting presets to slow for performance\n");
// Open the encoder
if (avcodec_open2(c, codec,NULL) < 0) {
fprintf(stderr, "Could not open codec\n");
exit(1);
}
else printf("H264 codec opened\n");
outbuf_size = 100000 + 12*c->width*c->height; // alloc image and output buffer
//outbuf_size = 100000;
outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
size = c->width * c->height;
picture_buf = static_cast<uint8_t*>(malloc((size*3)/2));
printf("Setting buffer size to: %d\n",outbuf_size);
FILE* f = fopen("example.mpg","wb");
if(!f) printf("x - Cannot open video file for writing\n");
else printf("Opened video file for writing\n");
/*inpic->data[0] = picture_buf;
inpic->data[1] = inpic->data[0] + size;
inpic->data[2] = inpic->data[1] + size / 4;
inpic->linesize[0] = c->width;
inpic->linesize[1] = c->width / 2;
inpic->linesize[2] = c->width / 2;*/
//int x,y;
// encode 1 second of video
for(i=0;i<c->time_base.den;i++) {
fflush(stdout);
HWND hDesktopWnd = GetDesktopWindow();
HDC hDesktopDC = GetDC(hDesktopWnd);
HDC hCaptureDC = CreateCompatibleDC(hDesktopDC);
hBmp = CreateCompatibleBitmap(GetDC(0), screenWidth, screenHeight);
SelectObject(hCaptureDC, hBmp);
BitBlt(hCaptureDC, 0, 0, screenWidth, screenHeight, hDesktopDC, 0, 0, SRCCOPY|CAPTUREBLT);
BITMAPINFO bmi = {0};
bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
bmi.bmiHeader.biWidth = screenWidth;
bmi.bmiHeader.biHeight = screenHeight;
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;
RGBQUAD *pPixels = new RGBQUAD[screenWidth*screenHeight];
GetDIBits(hCaptureDC,hBmp,0,screenHeight,pPixels,&bmi,DIB_RGB_COLORS);
inpic->pts = (float) i * (1000.0/(float)(c->time_base.den))*90;
avpicture_fill((AVPicture*)inpic, (uint8_t*)pPixels, PIX_FMT_BGR32, c->width, c->height); // Fill picture with image
av_image_alloc(inpic->data, inpic->linesize, c->width, c->height, c->pix_fmt, 1);
//printf("Allocated frame\n");
//SaveBMPFile(L"screenshot.bmp",hBmp,hDc,screenWidth,screenHeight);
ReleaseDC(hDesktopWnd,hDesktopDC);
DeleteDC(hCaptureDC);
DeleteObject(hBmp);
// encode the image
out_size = avcodec_encode_video(c, outbuf, outbuf_size, inpic);
printf("Encoding frame %3d (size=%5d)\n", i, out_size);
fwrite(outbuf, 1, out_size, f);
}
// get the delayed frames
for(; out_size; i++) {
fflush(stdout);
out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
printf("Writing frame %3d (size=%5d)\n", i, out_size);
fwrite(outbuf, 1, out_size, f);
}
// add sequence end code to have a real mpeg file
outbuf[0] = 0x00;
outbuf[1] = 0x00;
outbuf[2] = 0x01;
outbuf[3] = 0xb7;
fwrite(outbuf, 1, 4, f);
fclose(f);
free(picture_buf);
free(outbuf);
avcodec_close(c);
av_free(c);
av_free(inpic);
printf("Closed codec and Freed\n");