
Other articles (100)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011. Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. No configuration step is therefore required.
-
Customising by adding your logo, banner or background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
On other sites (14416)
-
ffmpeg/libavcodec memory management
23 July 2015, by Jason C. The libavcodec documentation is not very specific about when to free allocated data, or how to free it. After reading through the documentation and examples, I've put together the sample program below. There are some specific questions inlined in the source, but my general question is: am I freeing all memory properly in the code below? I realize the program below doesn't do any cleanup after errors; the focus is on final cleanup.
The testfile() function is the one in question.
extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
}
#include <cstdio>
using namespace std;
void AVFAIL (int code, const char *what) {
char msg[500];
av_strerror(code, msg, sizeof(msg));
fprintf(stderr, "failed: %s\nerror: %s\n", what, msg);
exit(2);
}
#define AVCHECK(f) do { int e = (f); if (e < 0) AVFAIL(e, #f); } while (0)
#define AVCHECKPTR(p,f) do { p = (f); if (!p) AVFAIL(AVERROR_UNKNOWN, #f); } while (0)
void testfile (const char *filename) {
AVFormatContext *format;
unsigned streamIndex;
AVStream *stream = NULL;
AVCodec *codec;
SwsContext *sws;
AVPacket packet;
AVFrame *rawframe;
AVFrame *rgbframe;
unsigned char *rgbdata;
av_register_all();
// load file header
AVCHECK(av_open_input_file(&format, filename, NULL, 0, NULL));
AVCHECK(av_find_stream_info(format));
// find video stream
for (streamIndex = 0; streamIndex < format->nb_streams && !stream; ++ streamIndex)
if (format->streams[streamIndex]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
stream = format->streams[streamIndex];
if (!stream) {
fprintf(stderr, "no video stream\n");
exit(2);
}
// initialize codec
AVCHECKPTR(codec, avcodec_find_decoder(stream->codec->codec_id));
AVCHECK(avcodec_open(stream->codec, codec));
int width = stream->codec->width;
int height = stream->codec->height;
// initialize frame buffers
int rgbbytes = avpicture_get_size(PIX_FMT_RGB24, width, height);
AVCHECKPTR(rawframe, avcodec_alloc_frame());
AVCHECKPTR(rgbframe, avcodec_alloc_frame());
AVCHECKPTR(rgbdata, (unsigned char *)av_mallocz(rgbbytes));
AVCHECK(avpicture_fill((AVPicture *)rgbframe, rgbdata, PIX_FMT_RGB24, width, height));
// initialize sws (for conversion to rgb24)
AVCHECKPTR(sws, sws_getContext(width, height, stream->codec->pix_fmt, width, height, PIX_FMT_RGB24, SWS_FAST_BILINEAR, NULL, NULL, NULL));
// read all frames fromfile
while (av_read_frame(format, &packet) >= 0) {
int frameok = 0;
if (packet.stream_index == (int)streamIndex)
AVCHECK(avcodec_decode_video2(stream->codec, rawframe, &frameok, &packet));
av_free_packet(&packet); // Q: is this necessary or will next av_read_frame take care of it?
if (frameok) {
sws_scale(sws, rawframe->data, rawframe->linesize, 0, height, rgbframe->data, rgbframe->linesize);
// would process rgbframe here
}
// Q: is there anything i need to free here?
}
// CLEANUP: Q: am i missing anything / doing anything unnecessary?
av_free(sws); // Q: is av_free all i need here?
av_free_packet(&packet); // Q: is this necessary (av_read_frame has returned < 0)?
av_free(rgbframe);
av_free(rgbdata);
av_free(rawframe); // Q: i can just do this once at end, instead of in loop above, right?
avcodec_close(stream->codec); // Q: do i need av_free(codec)?
av_close_input_file(format); // Q: do i need av_free(format)?
}
int main (int argc, char **argv) {
if (argc != 2) {
fprintf(stderr, "usage: %s filename\n", argv[0]);
return 1;
}
testfile(argv[1]);
}
Specific questions:
- Is there anything I need to free in the frame processing loop, or will libav take care of memory management there for me?
- Is av_free the correct way to free an SwsContext?
- The frame loop exits when av_read_frame returns < 0. In that case, do I still need to av_free_packet when it's done?
- Do I need to call av_free_packet every time through the loop, or will av_read_frame free/reuse the old AVPacket automatically?
- I can just av_free the AVFrames at the end of the loop instead of reallocating them each time through, correct? It seems to be working fine, but I'd like to confirm that it's working because it's supposed to, rather than by luck.
- Do I need to av_free(codec) the AVCodec or do anything else after avcodec_close on the AVCodecContext?
- Do I need to av_free(format) the AVFormatContext or do anything else after av_close_input_file?
I also realize that some of these functions are deprecated in current versions of libav. For reasons that are not relevant here, I have to use them.
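For reference, a sketch of the alloc/free pairings for the old API generation used above, as documented for that era; treat it as a sanity check under those assumptions, not an authoritative answer to the question:
// Sketch only: assumed pairings for the pre-refcounting libav API.
// Inside the loop: one av_free_packet per successful av_read_frame;
// av_read_frame does not free or reuse the previous packet for you.
av_free_packet(&packet);

// After the loop (av_read_frame returned < 0) no packet was produced,
// so the trailing av_free_packet should be unnecessary.

// Final cleanup, with the matching destructors:
sws_freeContext(sws);          // SwsContext has its own destructor; plain av_free is not enough
av_free(rgbframe);             // frees only the struct; its pixel buffer is rgbdata...
av_free(rgbdata);              // ...which pairs with the av_mallocz above
av_free(rawframe);             // the decoder owns and reuses rawframe->data, so free the struct once
avcodec_close(stream->codec);  // releases decoder buffers; the AVCodec itself is static, never freed
av_close_input_file(format);   // also frees the AVFormatContext itself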
-
Using OpenMAX (IL ?) for audio/video decoding on Android
14 September 2012, by Christopher Corsi. Many of the newer hardware platforms running Android, in particular NVIDIA's Tegra 2, support OpenMAX for media acceleration. It's effectively impossible on today's devices to decode 720p video without this support, but the number of demuxers supported on Android is quite slim. The only public API I've been able to find is through the MediaPlayer class in the Android SDK. There are multiple places in the Android source tree with OpenMAX related tidbits, however.
On my device (Samsung Galaxy Tab 10.1) I've got access to hardware decoders through a multitude of OpenMAX libs in /system/lib, and it would be great to interface my video application with these. Can anyone point me to information on implementing a decoder powered by OpenMAX? I've found the documentation from Khronos, but nothing in the way of example code or tutorials. I've already got demuxing and even software decoding taken care of (via libavcodec/libavformat); I'd just like to put hooks in to enable hardware decoding. I'm also assuming it would be necessary to link directly to the libraries available on the device, which makes it pretty lackluster in terms of portability, but it works.
Alternatively, I'm interested in anything anyone knows about private APIs for accessing the video decoding available on Tegra 2 devices. Especially if there's a vdpau interface like what NVIDIA implements for desktop linux distributions, since there's plenty available for that - but I wasn't able to find shared libraries that indicate that support.
-
ffmpeg split videos from times in csv file
10 June 2018, by martins. I'm using Python 3.6. I am trying to split videos into subclips at specified times. I have a folder with 250 ".mp4" files and a separate csv with the times at which I want each video to be subclipped. For instance, the first .mp4 file would be "firstvideo.mp4", up to "twohundredfiftyvideo.mp4". Separately, I have a csv file with each video file name in column A and the times at which each video needs to be split (columns B to I). All videos need to be split into 4 subclips. The csv looks like:
       col A             col B      col C      col D      col E   ..  col I
row1   firstvideo.mp4    00:00:10   00:00:20   00:01:15   00:02:04 ..  00:07:15
row2   secondvideo.mp4   00:00:15   00:00:34   00:01:05   00:01:55 ..  00:08:23

"firstvideo.mp4" needs a first split from second 10 to second 20, a second split from 1m15s to 2m04s, and so on (col I is the time at which the fourth subclip should stop). The process should iterate over each of the 250 rows of the csv file, corresponding to the 250 mp4 files in the folder.
So far, the only thing I know how to do is use ffmpeg to split a video into 4 subclips, generating 4 different output files, but I do not know how to read the csv line by line. This is the code I have so far:
ffmpeg -i firstvideo.mp4 \
    -vcodec copy -acodec copy -ss 00:00:10 -t 00:00:20 firstvideo_1.mp4 \
    -vcodec copy -acodec copy -ss 00:01:15 -t 00:02:04 firstvideo_2.mp4 \
    -vcodec copy -acodec copy -ss 00:03:48 -t 00:04:23 firstvideo_3.mp4 \
    -vcodec copy -acodec copy -ss 00:05:30 -t 00:07:15 firstvideo_4.mp4

I name each output file with _1, _2, _3 or _4 appended to the original video file name. Ideally, I would generate 4 subclips per video (i.e. 250 videos x 4 subclips/video = 1,000 mp4 files) and then concatenate each video's 4 subclips into one file (i.e. 250 additional files). In fact, I don't care about the 4 subclips; I'd delete them once they have generated my concatenated file.
Thanks for your time anyways,
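A possible shape for the missing csv loop, sketched in Python 3.6 under assumptions about the layout: a hypothetical file name cuts.csv, the video name in column A, and eight timestamps in columns B..I forming four start/end pairs. Note that ffmpeg's -t takes a duration while -to takes an end timestamp, so the sketch uses -to; with -c copy the cuts land on keyframes and are therefore approximate:
import csv
import subprocess

# Assumptions: videos and "cuts.csv" are in the working directory;
# each csv row is: filename, then eight timestamps (columns B..I).
with open("cuts.csv", newline="") as f:
    for row in csv.reader(f):
        video = row[0]
        times = row[1:9]
        clips = []
        # pair the timestamps: (B,C), (D,E), (F,G), (H,I)
        for i, (start, end) in enumerate(zip(times[0::2], times[1::2]), 1):
            clip = "%s_%d.mp4" % (video[:-4], i)
            subprocess.run(["ffmpeg", "-y", "-i", video,
                            "-ss", start, "-to", end,
                            "-c", "copy", clip], check=True)
            clips.append(clip)
        # concatenate the four subclips with the concat demuxer
        listfile = video[:-4] + "_list.txt"
        with open(listfile, "w") as lf:
            lf.writelines("file '%s'\n" % c for c in clips)
        subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                        "-i", listfile, "-c", "copy",
                        video[:-4] + "_joined.mp4"], check=True)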