
Media (3)
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (84)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP. You can of course add your own using the form at the bottom of the page.
-
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administer" section of the site.
From there, in the navigation menu, you can reach a "Language management" section that lets you enable support for new languages.
Each newly added language can still be deactivated as long as no object has been created in that language; once one has, it appears greyed out in the configuration and (...)
-
User profiles
12 April 2011
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; an "Edit your profile" link in the navigation is (...)
On other sites (9001)
-
Video conversion with ffmpeg to target Android and iOS mobile devices
17 November 2017, by Lee Brindley
I'm building a React Native app for both Android and iOS; the back-end API is written in NodeJS.
Users may upload video from their phones; once uploaded, the user and their friends will be able to view the video, so the videos need to be stored in a format that is playable on both Android and iOS.
My question relates to the conversion of the video uploaded by the user. I developed a similar app a couple of years ago; I used the node-fluent-ffmpeg repo, which provides a nice API for interacting with FFmpeg.
In the previous project (which was a web app), I converted the uploaded videos into two files, one .mp4 and one .webm. If a user uploaded an .mp4 I would skip the .mp4 step, and likewise if they uploaded a .webm.
This was kind of slow. Now I've come across the same requirement years later, and after some research I think I was wrong to convert the videos in the last project.
I've read that I can simply use FFmpeg to change the container format of the videos, which is a much faster process than re-encoding them from scratch.
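For context, a container-only change (a remux) with node-fluent-ffmpeg would just stream-copy the existing audio and video into the new container rather than re-encode them. The snippet below is only a minimal sketch, assuming the source codecs are actually allowed in the target container (e.g. H.264/AAC going into .mp4); the file names are placeholders:

var ffmpeg = require('fluent-ffmpeg');

// Remux only: copy the existing streams into a new container, no re-encoding.
// This fails if the target container does not support the source codecs.
ffmpeg('uploaded-video.mov')
    .videoCodec('copy')
    .audioCodec('copy')
    .format('mp4')
    .on('end', function () { console.log('remux finished'); })
    .on('error', function (err) { console.error('remux failed', err); })
    .save('uploaded-video.mp4');

This is roughly the API equivalent of ffmpeg -i input.mov -c copy output.mp4, which is why it is so much faster than a full conversion.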
The video conversion code I used last time went something along the lines of:
var ffmpeg = require('fluent-ffmpeg');

// Re-encode `source` into the given container `format` and save it to `output`.
var convertVideo = function (source, format, output, success, failure, progress) {
    var converter = ffmpeg(source);
    // Vorbis audio by default (for webm); switch to AAC when targeting mp4
    var audioCodec = "libvorbis";
    if (format.indexOf("mp4") != -1) {
        audioCodec = "aac";
    }
    converter.format(format)
        .withVideoBitrate(1024)
        .withAudioCodec(audioCodec)
        .on('end', success)
        .on('progress', progress)
        .on('error', failure);
    converter.save(output);
};

Usage:
Convert to mp4:
convertVideo("PATH_TO_VIDEO", "mp4", "foo.mp4", () => { console.log("success"); }, (err) => { console.error(err); }, () => {});
Convert to webm:
convertVideo("PATH_TO_VIDEO", "webm", "foo.webm", () => { console.log("success"); }, (err) => { console.error(err); }, () => {});
Can anyone point out a code smell here regarding the performance of this operation? Is this code doing a lot more than it should to achieve cross-platform compatibility between iOS and Android?
It might be worth mentioning that support for older OS versions is not a big deal in this project.
-
Writing OpenCV frames to an AVI container using libavformat custom IO
16 July 2017, by Aryan
I have to write OpenCV cv::Mat frames to an AVI container. I cannot use OpenCV's VideoWriter because I do not intend to write the AVI file to disk directly; instead I want to send it to a custom stream, so I have to use ffmpeg/libav. As I have never used ffmpeg before, I have taken help from the solutions provided here and here, along with the ffmpeg documentation.
I am able to send AVI container packets to my custom output stream as required, but the performance is very bad. Specifically, the call to avcodec_encode_video2 is taking too long.
First, I suspect that due to my inexperience I have misconfigured or wrongly coded something. I am currently working with 640x480 grayscale frames. On my i.MX6 platform the call to avcodec_encode_video2 takes about 130 ms per frame on average, which is unacceptably slow. Any pointers to an obvious performance killer? (I know sws_scale looks useless here, but it takes negligible time and might be useful to me later.)
Second, I am using the PNG encoder, but that is not required; I would be happy to write uncompressed data if I knew how to. If the slowdown is not due to my bad programming, can we just get rid of the encoder and generate uncompressed packets for the AVI container? Or use some encoder that accepts grayscale images and is not that slow?
For initialization and writing of the header I am using:
void MyWriter::WriteHeader()
{
av_register_all();
// allocate output format context
if (avformat_alloc_output_context2(&avFmtCtx, NULL, "avi", NULL) < 0) { printf("DATAREC: avformat_alloc_output_context2 failed"); exit(1); }
// buffer for avio context
bufSize = 640 * 480 * 4; // Don't know how to derive, but this should be big enough for now
buffer = (unsigned char*)av_malloc(bufSize);
if (!buffer) { printf("DATAREC: Buffer alloc failed"); exit(1); }
// allocate avio context
avIoCtx = avio_alloc_context(buffer, bufSize, 1, this, NULL, WriteCallbackWrapper, NULL);
if (!avIoCtx) { printf("DATAREC: avio_alloc_context failed"); exit(1); }
// connect avio context to format context
avFmtCtx->pb = avIoCtx;
// set custom IO flag
avFmtCtx->flags |= AVFMT_FLAG_CUSTOM_IO;
// get encoder
encoder = avcodec_find_encoder(AV_CODEC_ID_PNG);
if (!encoder) { printf("DATAREC: avcodec_find_encoder failed"); exit(1); }
// create new stream
avStream = avformat_new_stream(avFmtCtx, encoder);
if (!avStream) { printf("DATAREC: avformat_new_stream failed"); exit(1); }
// set stream codec defaults
if (avcodec_get_context_defaults3(avStream->codec, encoder) < 0) { printf("DATAREC: avcodec_get_context_defaults3 failed"); exit(1); }
// hardcode settings for now
avStream->codec->pix_fmt = AV_PIX_FMT_GRAY8;
avStream->codec->width = 640;
avStream->codec->height = 480;
avStream->codec->time_base.den = 15;
avStream->codec->time_base.num = 1;
avStream->time_base.den = avStream->codec->time_base.den;
avStream->time_base.num = avStream->codec->time_base.num;
avStream->r_frame_rate.num = avStream->codec->time_base.den;
avStream->r_frame_rate.den = avStream->codec->time_base.num;
// open encoder
if (avcodec_open2(avStream->codec, encoder, NULL) < 0) {
printf("DATAREC: Cannot open codec\n");
exit(1);
}
// write header
if(avformat_write_header(avFmtCtx, NULL) < 0) { printf("DATAREC: avformat_write_header failed\n"); exit(1);}
// prepare for first frame
framePts = 0;
firstFrame = true;
}
After writing the header, the following function is called in a loop for each cv::Mat frame:
void MyWriter::WriteFrame(cv::Mat& item)
{
if (firstFrame) // do only once, before writing the first frame
{
// allocate frame
frame = av_frame_alloc();
if (!frame) { printf("DATAREC: av_frame_alloc failed"); exit(1); }
// get size for framebuffer
int picsz = av_image_get_buffer_size(avStream->codec->pix_fmt, avStream->codec->width, avStream->codec->height, 1);
// allocate frame buffer
framebuf = (unsigned char*)av_malloc(picsz);
if (!framebuf) { printf("DATAREC: fail to alloc framebuf"); exit(1); }
// set frame width, height, format
frame->width = avStream->codec->width;
frame->height = avStream->codec->height;
frame->format = static_cast<int>(avStream->codec->pix_fmt);
// set up data pointers and linesizes
if (av_image_fill_arrays(frame->data, frame->linesize, framebuf, avStream->codec->pix_fmt, avStream->codec->width, avStream->codec->height, 1) < 0) { printf("DATAREC: av_image_fill_arrays failed\n"); exit(1);}
// get sws context
swsctx = sws_getCachedContext(nullptr, avStream->codec->width, avStream->codec->height, avStream->codec->pix_fmt, avStream->codec->width, avStream->codec->height, avStream->codec->pix_fmt, SWS_BICUBIC, nullptr, nullptr, nullptr);
if (!swsctx) { printf("DATAREC: fail to sws_getCachedContext"); exit(1); }
// done initializing
firstFrame = false; // don't repeat this for the following frames
}
// call sws scale
const int stride[] = { static_cast<int>(item.step[0]) };
sws_scale(swsctx, &item.data, stride, 0, item.rows, frame->data, frame->linesize);
// set presentation timestamp
frame->pts = framePts++;
// initialize packet
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
// THIS TAKES VERY LONG TO EXECUTE
// call encoder, convert frame to packet
if (avcodec_encode_video2(avStream->codec, &pkt, frame, &got_pkt) < 0) { printf("DATAREC: fail to avcodec_encode_video2"); exit(1); }
// write packet if available
if (got_pkt)
{
pkt.duration = 1;
av_write_frame(avFmtCtx, &pkt);
}
// wipe packet
av_packet_unref(&pkt);
}
After writing the required frames, the trailer is written:
void MyWriter::WriteTrailer()
{
// prepare packet for trailer
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
// flush the encoder: a NULL frame drains any delayed packets before the trailer
if (avcodec_encode_video2(avStream->codec, &pkt, nullptr, &got_pkt) < 0) { printf("DATAREC: fail to avcodec_encode_video2"); exit(1); }
// write trailer packet
if (got_pkt)
{
pkt.duration = 1;
av_write_frame(avFmtCtx, &pkt);
}
// free everything
av_packet_unref(&pkt);
av_write_trailer(avFmtCtx);
av_frame_free(&frame);
avcodec_close(avStream->codec);
av_free(avIoCtx);
sws_freeContext(swsctx);
avformat_free_context(avFmtCtx);
av_free(framebuf);
av_free(buffer);
firstFrame = true; // for the next file
}
Many many thanks to everyone who made it down to this line!