
Other articles (27)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
From upload to final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (4612)
-
Turn image sequence into video with transparency
29 January 2014, by Cody Hatch
I've got what seems like it should be a really simple problem, but it's proving much harder than I expected. Here's the issue:
I've got a fairly large image sequence consisting of numbered frames (output from Maya, for what it's worth). The images are currently in Targa (.tga) format, but I could convert them to PNGs or another arbitrary format if that matters. The important thing is, they've got an alpha channel.
What I want to do is programmatically turn them into a video clip. The format doesn't really matter, but it needs to be lossless and have an alpha channel. Uncompressed video in a Quicktime container would probably be ideal.
My initial thought was ffmpeg, but after wasting most of a day on it, it seems it has no support at all for alpha channels. Either I'm missing something, or the underlying libavcodec just doesn't do it.
So, what's the right way here? A command-line tool like ffmpeg would be nice, but any solution that runs on Windows and can be called from a script would be fine.
Note: having an alpha channel in your video isn't actually all that uncommon, and it's really useful if you want to composite it on top of another video clip or a still image. As far as I know, uncompressed video, the Quicktime Animation codec, and the Sorenson Video 3 codec all support transparency, and I've heard H.264 does as well. All we're really talking about is 32-bit color depth, and that's pretty widely supported; both Quicktime .mov files and Windows .avi files can handle it, and probably a lot more too.
Quicktime Pro is more than happy to turn an image sequence into a 32-bit .mov file. Hit export, change color depth to "Millions of Colors+", select the Animation codec, crank the quality up to 100, and there you are: losslessly compressed video, with an alpha channel, and it'll play back almost anywhere since the codec has been part of Quicktime since version 1.0. The problem is, Quicktime Pro doesn't have any sort of command-line interface (at least on Windows). ffmpeg supports encoding with the Quicktime Animation codec (which it calls qtrle), but it only supports a bit depth of 24 bits.
The issue isn't finding a video format that supports an alpha channel. Quicktime Animation would be ideal, but even uncompressed video should work. The problem is finding a tool that supports it.
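For what it's worth, later ffmpeg builds did gain 32-bit support in the qtrle encoder, so this may now be solvable with ffmpeg alone. A minimal sketch, assuming a reasonably recent build and a numbered PNG sequence (the file pattern and frame rate here are placeholders):
ffmpeg -framerate 25 -i frame.%04d.png -c:v qtrle -pix_fmt argb output.mov
Requesting -pix_fmt argb is what asks the Animation codec for the extra alpha plane; with a 24-bit pixel format the alpha channel is dropped during conversion.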
-
How to render video and audio
25 October 2011, by pic11
I am trying to implement my own media player. What is the best way to render video and audio? At this point I am thinking of using the SurfaceView and AudioTrack classes, but I am not sure that is the best option. I am interested in both SDK and NDK solutions.
File output on a regular desktop is non-blocking; that is, the OS takes care of buffering, and the actual disk writes are asynchronous to the thread that initiates the output. Does the same principle apply to video and audio output? If not, I would need to run a separate thread to handle output asynchronously from decoding/demuxing.
What free-software decoders are available for Android? I am thinking of using ffmpeg. Can a relatively recent tablet (say, top 30% in terms of CPU power) handle 1280×720 and 1920×1080 formats in software mode?
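On the NDK side, one common approach for the video half is to decode to RGBA and push frames onto the SurfaceView's Surface with the ANativeWindow API. A minimal sketch, assuming frames arrive already decoded; the frame source and most error handling are left out:
#include <jni.h>
#include <stdint.h>
#include <string.h>
#include <android/native_window.h>
#include <android/native_window_jni.h>

/* Sketch: copy one decoded RGBA frame into the Surface that backs a
   SurfaceView; 'surface' is the android.view.Surface jobject handed
   down from the Java side. */
void render_frame(JNIEnv *env, jobject surface,
                  const uint8_t *rgba, int width, int height)
{
    ANativeWindow *window = ANativeWindow_fromSurface(env, surface);
    ANativeWindow_Buffer buffer;
    if (!window)
        return;
    ANativeWindow_setBuffersGeometry(window, width, height,
                                     WINDOW_FORMAT_RGBA_8888);
    /* ANativeWindow_lock blocks until a buffer is free to draw into */
    if (ANativeWindow_lock(window, &buffer, NULL) == 0) {
        /* buffer.stride is in pixels and may exceed the frame width,
           so copy row by row */
        uint8_t *dst = (uint8_t *) buffer.bits;
        int row;
        for (row = 0; row < height; row++)
            memcpy(dst + row * buffer.stride * 4,
                   rgba + row * width * 4,
                   width * 4);
        ANativeWindow_unlockAndPost(window);
    }
    ANativeWindow_release(window);
}
As for the blocking question: AudioTrack.write() in streaming mode blocks until the data has been queued, and ANativeWindow_lock() can block waiting for a free buffer, so neither output path gives desktop-style asynchronous writes for free; a dedicated output thread fed from the decoder is the usual design.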
-
FFmpeg sample code for creating a video file from still images (JNI, Android)
21 June 2012, by anish
How do I modify the following FFmpeg sample code to create a video file from the still images I have on my Android phone? I am using JNI to invoke ffmpeg.
#include <jni.h>
#include <stdio.h>
#include <stdlib.h>
#include <libavcodec/avcodec.h>

JNIEXPORT void JNICALL videoEncodeExample(JNIEnv *pEnv, jobject pObj, jstring filename)
{
    AVCodec *codec;
    AVCodecContext *c = NULL;
    int i, out_size, size, x, y, outbuf_size;
    FILE *f;
    AVFrame *picture;
    uint8_t *outbuf, *picture_buf;
    /* get a C string out of the Java string so fopen() can use it */
    const char *path = (*pEnv)->GetStringUTFChars(pEnv, filename, NULL);
    printf("Video encoding\n");
    /* register the codecs; required once before avcodec_find_encoder() */
    avcodec_register_all();
    /* find the mpeg1 video encoder */
    codec = avcodec_find_encoder(CODEC_ID_MPEG1VIDEO);
    if (!codec) {
        fprintf(stderr, "codec not found\n");
        exit(1); /* note: exit() kills the whole app; real code should return an error */
    }
    c = avcodec_alloc_context();
    picture = avcodec_alloc_frame();
    /* put sample parameters */
    c->bit_rate = 400000;
    /* resolution must be a multiple of two */
    c->width = 352;
    c->height = 288;
    /* frames per second */
    c->time_base = (AVRational){1, 25};
    c->gop_size = 10; /* emit one intra frame every ten frames */
    c->max_b_frames = 1;
    c->pix_fmt = PIX_FMT_YUV420P;
    /* open it */
    if (avcodec_open(c, codec) < 0) {
        fprintf(stderr, "could not open codec\n");
        exit(1);
    }
    f = fopen(path, "wb");
    if (!f) {
        fprintf(stderr, "could not open %s\n", path);
        exit(1);
    }
    /* alloc image and output buffer */
    outbuf_size = 100000;
    outbuf = malloc(outbuf_size);
    size = c->width * c->height;
    picture_buf = malloc((size * 3) / 2); /* size for YUV 420 */
    picture->data[0] = picture_buf;
    picture->data[1] = picture->data[0] + size;
    picture->data[2] = picture->data[1] + size / 4;
    picture->linesize[0] = c->width;
    picture->linesize[1] = c->width / 2;
    picture->linesize[2] = c->width / 2;
    /* encode 1 second of video */
    for (i = 0; i < 25; i++) {
        fflush(stdout);
        /* prepare a dummy image */
        /* Y */
        for (y = 0; y < c->height; y++) {
            for (x = 0; x < c->width; x++) {
                picture->data[0][y * picture->linesize[0] + x] = x + y + i * 3;
            }
        }
        /* Cb and Cr */
        for (y = 0; y < c->height / 2; y++) {
            for (x = 0; x < c->width / 2; x++) {
                picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
                picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
            }
        }
        /* encode the image */
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture);
        printf("encoding frame %3d (size=%5d)\n", i, out_size);
        fwrite(outbuf, 1, out_size, f);
    }
    /* get the delayed frames */
    for (; out_size; i++) {
        fflush(stdout);
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
        printf("write frame %3d (size=%5d)\n", i, out_size);
        fwrite(outbuf, 1, out_size, f);
    }
    /* add sequence end code to have a real mpeg file */
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
    fclose(f);
    free(picture_buf);
    free(outbuf);
    avcodec_close(c);
    av_free(c);
    av_free(picture);
    (*pEnv)->ReleaseStringUTFChars(pEnv, filename, path);
    printf("\n");
}
Thanks and Regards
Anish
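To feed real still images into this sample instead of the generated test pattern, one sketch, assuming each image has already been decoded to packed RGB24 (load_rgb_frame() below is a hypothetical helper standing in for whatever image decoder is used), is to convert each frame into the YUV420P planes with libswscale, replacing the dummy-image loops above:
#include <libswscale/swscale.h>

/* hypothetical helper: fills 'rgb' (w*h*3 bytes, packed RGB24) with
   frame i, returning < 0 when there are no more images */
extern int load_rgb_frame(int i, uint8_t *rgb, int w, int h);

/* inside videoEncodeExample(), in place of the dummy-image for-loop: */
struct SwsContext *sws = sws_getContext(c->width, c->height, PIX_FMT_RGB24,
                                        c->width, c->height, c->pix_fmt,
                                        SWS_BILINEAR, NULL, NULL, NULL);
uint8_t *rgb = malloc(c->width * c->height * 3);
const uint8_t *src[4] = { rgb, NULL, NULL, NULL };
int src_stride[4] = { 3 * c->width, 0, 0, 0 };
for (i = 0; ; i++) {
    if (load_rgb_frame(i, rgb, c->width, c->height) < 0)
        break; /* ran out of images */
    /* convert packed RGB24 into the planar YUV420P buffers set up earlier */
    sws_scale(sws, src, src_stride, 0, c->height,
              picture->data, picture->linesize);
    out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture);
    fwrite(outbuf, 1, out_size, f);
}
free(rgb);
sws_freeContext(sws);
This assumes the source images match the encoder's resolution; if they do not, passing the images' own dimensions as the first pair of sws_getContext() arguments lets libswscale do the resizing, keeping in mind that MPEG-1 still requires even dimensions.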