
Media (1)
-
MediaSPIP Simple: a future default graphic theme?
26 September 2013
Updated: October 2013
Language: French
Type: Video
Other articles (100)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, MediaSPIP init automatically sets up a preconfiguration so that the new feature is operational right away; no configuration step is required for this.
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
On other sites (13308)
-
Why are motion vectors extracted from B-frames unchanged?
19 August 2019, by 霍宇琦
I am writing C code to extract motion vectors from B-frames in MPEG-4 (Part 2) compressed video, but some motion vectors seem wrong.
I use a raw video clip with extract_mvs.c from FFmpeg 4.2. For example, if the frame sequence is IPBPPBPP..., I can get the side data for every frame. But when inspecting mv->src_x, mv->src_y, mv->dst_x and mv->dst_y, I find that for some individual frames all the src values are equal to the dst values. Something must be wrong, yet I changed very little from the official code.
// Modified from ffmpeg/doc/examples/extract_mvs.c.
// The loop header is pseudocode: in the real example, decoded frames
// are drained here via avcodec_receive_frame().
while (getting frames one by one) {
    AVFrameSideData *sd;
    int i;

    video_frame_count++;
    if (video_frame_count < 19) {
        // Print the picture type of each frame.
        if (frame->pict_type == AV_PICTURE_TYPE_I) printf("\nI");
        if (frame->pict_type == AV_PICTURE_TYPE_B) printf("B");
        if (frame->pict_type == AV_PICTURE_TYPE_P) printf("P");

        sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
        if (sd) {
            const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
            // Print the first motion vector of this frame whose source and
            // destination differ, then stop scanning the frame.
            for (i = 0; i < sd->size / sizeof(*mvs); i++) {
                const AVMotionVector *mv = &mvs[i];
                if (mv->dst_x - mv->src_x != 0 || mv->dst_y - mv->src_y != 0) {
                    printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64"\n",
                           video_frame_count, mv->source,
                           mv->w, mv->h, mv->src_x, mv->src_y,
                           mv->dst_x, mv->dst_y, mv->flags);
                    break;
                }
            }
        }
    }
    av_frame_unref(frame); // unref every frame, not only the first 18
}
The output is:
Input #0, avi, from 'origin.avi':
Duration: 00:00:13.63, start: 0.000000, bitrate: 492 kb/s
Stream #0:0: Video: mpeg4 (DX50 / 0x30355844), yuv420p, 320x240 [SAR 1:1 DAR 4:3], 486 kb/s, 30 fps, 30 tbr, 30 tbn, 30k tbc
Metadata:
title : H:\HumanActionDB\MotionClips\hmdb51_30fps_wBrd_10off_divx\brush_??
framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags
[mpeg4 @ 0x55739dc29580] Video uses a non-standard and wasteful way to store B-frames ('packed B-frames'). Consider using the mpeg4_unpack_bframes bitstream filter without encoding but stream copy to fix it.
IP2,-1,16,16, 137, 24, 136, 24,0x0
BP4,-1,16,16, 152, 57, 152, 56,0x0
P5,-1,16,16, 56, 155, 56, 152,0x0
BP7,-1,16,16, 151, 40, 152, 40,0x0
PB9,-1,16,16, 151, 40, 152, 40,0x0
P10,-1,16,16, 152, 39, 152, 40,0x0
P11,-1,16,16, 26, 55, 24, 56,0x0
B12,-1,16,16, 152, 39, 152, 40,0x0
P13,-1,16,16, 152, 39, 152, 40,0x0
P14,-1,16,16, 41, 168, 40, 168,0x0
B15,-1,16,16, 152, 39, 152, 40,0x0
P16,-1,16,16, 153, 39, 152, 40,0x0
PB18,-1,16,16, 168, 55, 168, 56,0x0
You can see that the third frame (B), the sixth and eighth frames (B, P), and the 17th frame (P) can be read and their side data extracted, but
mv->dst_x == mv->src_x && mv->dst_y == mv->src_y
Can someone help me? Thanks.
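For reference: the decoder warning in the output above points at packed B-frames, and the lossless fix it suggests is to unpack them with a stream copy before extracting motion vectors. A minimal sketch of that command (the output name unpacked.avi is a placeholder):
ffmpeg -i origin.avi -codec copy -bsf:v mpeg4_unpack_bframes unpacked.avi
The duplicate frames a packed stream produces may also explain why some frames report only zero-length vectors.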
-
FFmpeg: generate video clips and concatenate them all
8 February 2020, by lms702
I have a bunch of images (.png) and audio files (.wav) that I want to combine and concatenate. For example, if I have a1.wav, i1.png, a2.wav, and i2.png, I want to output a video in which a1.wav is overlaid onto i1.png, then concatenated with a2.wav overlaid onto i2.png.
Currently, my approach is to save each individual clip and then concatenate them all at the end.
To save each clip, I use this command (in a loop for all of my clips):
ffmpeg -i {imageFile} -i {audioFile} -nostdin -qscale:v 1 -vcodec libx264 -pix_fmt yuv420p {outputFile.mp4}
It outputs an MP4 that kind of works: the playback is really buggy, but it works in full screen.
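As an aside, a commonly used pattern for muxing a single still image with an audio track is to loop the image and cut the output at the audio's length; a minimal sketch under those assumptions (the file names are placeholders):
ffmpeg -loop 1 -i i1.png -i a1.wav -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac -shortest clip1.mp4
Here -shortest stops encoding when the shorter input ends, which is the audio, since the image loops indefinitely.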
My current approach to concatenating has not been at all successful. I create a list of the clip names, put it into filename, and then call this command:
ffmpeg -f concat -i {filename} -c copy clips/finalOutput.mp4
This outputs a pretty jumbled video and this error, repeated:
[mp4 @ 000001cdec110740] Non-monotonous DTS in output stream 0:1; previous: 1045041, current: 604571; changing to 1045042. This may result in incorrect timestamps in the output file.
So, a few questions.
What is the best way to go about this process?
Should I be saving each clip, or is there a better way to do it all in one command?
If I do save each clip, is there a better file format I should use?
Please help with the concatenation command (see the sketch after this question).
Also note that because I am automating this with Python, I can build arbitrarily large commands, though that might not be ideal.
I am very new to this and I would really appreciate any help!
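For reference, a minimal sketch of one way to avoid the non-monotonous DTS errors: list the clips in a text file for the concat demuxer and re-encode at the concatenation step instead of stream-copying (the names clips.txt, clip1.mp4 and clip2.mp4 are placeholders):
file 'clip1.mp4'
file 'clip2.mp4'
and then:
ffmpeg -f concat -safe 0 -i clips.txt -c:v libx264 -pix_fmt yuv420p -c:a aac finalOutput.mp4
Re-encoding lets the muxer regenerate clean timestamps at the cost of an extra encode; stream copy (-c copy) only works cleanly when all clips share identical codec parameters and well-formed timestamps.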
-
vp9: split superframes in the filtering stage before actual decoding
13 November 2016, by Anton Khirnov