
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (6)
-
Selection of projects using MediaSPIP
2 May 2011, by
The examples below are representative of specific uses of MediaSPIP for particular projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops reception activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of this kind. Its members (...) -
Other interesting software
13 April 2011, by
We don't claim to be the only ones doing what we do, and certainly don't claim to be the best; we simply try to do it well and keep getting better.
The following list covers software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to do the same things as.
We don't know them all and haven't tried them, but you can take a look.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...) -
Selection of projects using MediaSPIP
29 April 2011, by
The examples below are representative of specific uses of MediaSPIP for certain projects.
Do you think you have a "remarkable" site built with MediaSPIP? Let us know here.
MediaSPIP farm @ Infini
The Infini association develops reception activities, an internet access point, training, innovative projects in the field of Information and Communication Technologies, and website hosting. In this respect it plays a unique role (...)
On other sites (3589)
-
What is the output of a rawvideo in rgb24 pixel format in an mpegts container?
9 June 2016, by Matt
I'm trying to read bitmap info in a video stream; however, the data is not what I expect. Here is the ffmpeg command I'm using to generate the video (it's basically a screen capture):
ffmpeg -video_size 1920x1080 -framerate 20 -f x11grab -i :0.0 -c:v rawvideo -f mpegts -pixel_format rgb24 capture.raw
Here is a snippet of the data:
byte hex binary
00 47 01000111
01 40 01000000
02 11 00010001
03 10 00010000
04 ff 11111111
05 7f 01111111
06 00 00000000
07 00 00000000
08 58 01011000
09 7f 01111111
10 7a 01111010
11 02 00000010
12 00 00000000
13 00 00000000
14 00 00000000
15 00 00000000
16 04 00000100
17 00 00000000
18 00 00000000
19 00 00000000
The first 4 bytes are just as I expected (the mpegts header), but the payload is not. Is there some other packet spec I am missing, or something else?
-
SDL Audio - Plays only Static Noise
30 April 2016, by bcpermafrost
I'm having an issue with playing audio.
I'm new to the SDL world of things, so I'm learning from a tutorial.
http://dranger.com/ffmpeg/tutorial03.html
As far as audio goes, I have exactly what he put down and didn't get the result he says I should get. At the end of the lesson he states that the audio should play normally. However, all I get is excessively loud static noise. This leads me to believe that the packets aren't being read correctly, but I have no idea how to debug or look for the issue.
Here is my main loop for parsing the packets:
while (av_read_frame(pFormatCtx, &packet) >= 0) {
    if (packet.stream_index == videoStream) {
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        if (frameFinished) {
            AVPicture pict;
            pict.data[0] = yPlane;
            pict.data[1] = uPlane;
            pict.data[2] = vPlane;
            pict.linesize[0] = pCodecCtx->width;
            pict.linesize[1] = uvPitch;
            pict.linesize[2] = uvPitch;

            sws_scale(sws_ctx,
                      pFrame->data, pFrame->linesize,
                      0, pCodecCtx->height,
                      pict.data, pict.linesize);

            //SDL_UnlockTexture(bmp);
            SDL_UpdateYUVTexture(bmp, 0,
                                 yPlane, pCodecCtx->width,
                                 uPlane, uvPitch,
                                 vPlane, uvPitch);
            SDL_RenderClear(renderer);
            SDL_RenderCopy(renderer, bmp, NULL, NULL);
            SDL_RenderPresent(renderer);
            av_free_packet(&packet);
        }
    }
    else if (packet.stream_index == audioStream) {
        packet_queue_put(&audioq, &packet);
    }
    else
        av_free_packet(&packet);

    SDL_PollEvent(&event);
    switch (event.type) {
    case SDL_QUIT:
        quit = 1;
        SDL_DestroyTexture(bmp);
        SDL_DestroyRenderer(renderer);
        SDL_DestroyWindow(screen);
        SDL_Quit();
        exit(0);
        break;
    default:
        break;
    }
}

This is my initialization of the audio device:
aCodecCtxOrig = pFormatCtx->streams[audioStream]->codec;
aCodec = avcodec_find_decoder(aCodecCtxOrig->codec_id);
if (!aCodec) {
    fprintf(stderr, "Unsupported codec!\n");
    return -1;
}

// Copy context
aCodecCtx = avcodec_alloc_context3(aCodec);
if (avcodec_copy_context(aCodecCtx, aCodecCtxOrig) != 0) {
    fprintf(stderr, "Couldn't copy codec context");
    return -1; // Error copying codec context
}

wanted_spec.freq = aCodecCtx->sample_rate;
wanted_spec.format = AUDIO_U16SYS;
wanted_spec.channels = aCodecCtx->channels;
wanted_spec.silence = 0;
wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
wanted_spec.callback = audio_callback;
wanted_spec.userdata = aCodecCtx;

if (SDL_OpenAudio(&wanted_spec, &spec) < 0) {
    fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError());
    return -1;
}

avcodec_open2(aCodecCtx, aCodec, NULL);

// audio_st = pFormatCtx->streams[index]
packet_queue_init(&audioq);
SDL_PauseAudio(0);

The callback (same as the tutorial):
void audio_callback(void *userdata, Uint8 *stream, int len) {
    AVCodecContext *aCodecCtx = (AVCodecContext *)userdata;
    int len1, audio_size;

    static uint8_t audio_buf[(MAX_AUDIO_FRAME_SIZE * 3) / 2];
    static unsigned int audio_buf_size = 0;
    static unsigned int audio_buf_index = 0;

    while (len > 0) {
        if (audio_buf_index >= audio_buf_size) {
            /* We have already sent all our data; get more */
            audio_size = audio_decode_frame(aCodecCtx, audio_buf, sizeof(audio_buf));
            if (audio_size < 0) {
                /* If error, output silence */
                audio_buf_size = 1024; // arbitrary?
                memset(audio_buf, 0, audio_buf_size);
            }
            else {
                audio_buf_size = audio_size;
            }
            audio_buf_index = 0;
        }
        len1 = audio_buf_size - audio_buf_index;
        if (len1 > len)
            len1 = len;
        memcpy(stream, (uint8_t *)audio_buf + audio_buf_index, len1);
        len -= len1;
        stream += len1;
        audio_buf_index += len1;
    }
}
How do you extract closed caption format from HLS video
6 December 2016, by rynop
I'm working on a Roku and tvOS app that is going to play HLS videos (VOD and live) as well as MP4. According to the Roku docs, EIA-608 is supported on both and should also work on tvOS.
My question is: given a video URL, how can I tell what format of closed captioning (EIA-608, WebVTT, etc.) is being used?
I can use
ffprobe -hide_banner
to tell if a program's stream has closed captioning. For example:
Duration: 00:02:36.76, start: 0.100511, bitrate: 0 kb/s
Program 0
Metadata:
variant_bitrate : 380000
Stream #0:0: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 400x228 [SAR 1:1 DAR 100:57], Closed Captions, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
Metadata:
variant_bitrate : 380000
Stream #0:1: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, mono, fltp, 48 kb/s
Metadata:
variant_bitrate : 380000
However, as you can see, Program 0 > Stream 0 just says that it has
Closed captions
- it does not list the type/spec of closed captioning technology being used.