
Media (1)
-
The pirate bay depuis la Belgique
1 April 2013
Updated: April 2013
Language: French
Type: Image
Other articles (64)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
-
Publish on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (9005)
-
Script - split video file in equivalent segments
4 October 2014, by Code_Ed_Student
I am currently using ffmpeg to slice video files. I automated the process with a script called ffmpeg_split.sh. To view the full script, click HERE.
When I ran this script on a few .mp4 files I had, this line would return nothing:

__DURATION_HMS=$(ffmpeg -i "$__FILE" 2>&1 | grep Duration | \
    grep '\d\d:\d\d:\d\d.\d\d' -o)

NOTE: This is line #54.
So without this value, the calls that come after it to the function parse_duration_info() return the error message. According to the comments in the original script, there should be two arguments to parse_duration_info():

# arg1: duration in format 01:23:45.67
# arg2: offset for group: 1 gives hours, 2 gives minutes,
#       3 gives seconds, 4 gives milliseconds

Here is the syntax to slice a video with the script:
ffmpeg_split.sh -s test_vid.mp4 -o video-part%03d.mp4 -c 00:00:08
Below is where I parse the duration:
function parse_duration_info() {
    if [[ $1 ]] && [[ $2 -gt 0 ]] && [[ $2 -lt 5 ]] ; then
        __OFFSET=$2
        __DURATION_PATTERN='\([0-9][0-9]\):\([0-9][0-9]\):\([0-9][0-9]\)\.\([0-9][0-9]\)'
        echo "$1" | sed "s/$__DURATION_PATTERN/\\$__OFFSET/"
    else
        echo "Bad input to parse_duration_info()"
        echo "Given duration $1"
        echo "Given offset $2"
        echo "Exiting..."
        exit 1
    fi
}
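For what it's worth, an empty result from that grep is consistent with GNU grep not understanding the \d escape unless it runs in -P (PCRE) mode. Below is a minimal sketch of two alternatives, reusing the variable names from the script above; the ffprobe variant is an assumption, not something the original script does:

# Same pipeline, but with a bracket expression that plain grep -oE accepts:
__DURATION_HMS=$(ffmpeg -i "$__FILE" 2>&1 | grep Duration | \
    grep -oE '[0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{2}')

# Or query ffprobe directly; note that -sexagesimal prints H:MM:SS.microseconds,
# so the parsing pattern would need a small adjustment:
__DURATION_HMS=$(ffprobe -v error -show_entries format=duration \
    -of default=noprint_wrappers=1:nokey=1 -sexagesimal "$__FILE")

-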
Construct fictitious P-frames from just I-frames [closed]
25 July 2024, by nilgirian
Some context: I recently saw this video, https://youtu.be/zXTpASSd9xE?si=5alGvZ_e13w0Ahmb. It's a continuous zoom into a fractal.


I've been thinking a lot about how they created this video 9 years ago. The problem is that these frames were mathematically intensive to compute back then, and they are still fairly hard to compute today.


He states in the video that it took him 33 hours to generate one keyframe.


I was wondering how I would replicate that work. I know that by brute force I could generate a large number of image files (essentially each image would be an I-frame) and then ask ffmpeg to compress them into an mp4 (where it would convert most of those images into P-frames). But if I did it that way, I calculate it would take me 6.5 years to render that 9-minute video (at 30 fps, on the hardware of 9 years ago).


So I imagine he only generated I-frames to cut down on time, and then somehow created fictitious P-frames in between. Given that consecutive frames are similar, this seems like it should be doable, since you're just zooming in. If he only generated the I-frames, one per second of video (at 30 fps), that work could be cut down to just 82 days.


So if I only want to generate the images that will be used as I-frames, could ffmpeg or some other program automatically make a best guess and generate fictitious P-frames for me?
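One possibility, offered here as a hedged suggestion rather than something from the thread: ffmpeg's minterpolate filter performs motion-compensated frame interpolation, so it can synthesize in-between frames from sparsely rendered images. A minimal sketch, assuming the hand-rendered keyframes are numbered PNGs meant to be one second apart (the file names are hypothetical):

ffmpeg -framerate 1 -i zoom_%05d.png \
    -vf "minterpolate=fps=30:mi_mode=mci:mc_mode=aobmc:me_mode=bidir" \
    -c:v libx264 -pix_fmt yuv420p zoom_interpolated.mp4

The interpolated frames are only an estimate of the missing fractal detail, so regions that change quickly between keyframes may smear or ghost; rendering keyframes closer together in those sections would reduce the artifacts.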


-
Collect AVFrames into buffer
24 November 2020, by mgukov
I'm collecting AVFrames into an array and then freeing them, but this causes a memory leak.



extern "C" {
#include <libavutil/frame.h>
#include <libavutil/imgutils.h>
}

#include <cstring>   // memset
#include <vector>
#include <iostream>

AVFrame * createFrame() {
    int width = 1280;
    int height = 720;
    AVPixelFormat format = AV_PIX_FMT_YUV420P;
    int buffer_size = av_image_get_buffer_size(format, width, height, 1);
    uint8_t * buffer = (uint8_t *)av_malloc(buffer_size * sizeof(uint8_t));
    memset(buffer, 1, buffer_size);

    uint8_t *src_buf[4];
    int src_linesize[4];
    av_image_fill_arrays(src_buf, src_linesize, buffer, format, width, height, 1);

    AVFrame * frame = av_frame_alloc();
    frame->width = width;
    frame->height = height;
    frame->format = format;
    av_frame_get_buffer(frame, 0);
    av_image_copy(frame->data, frame->linesize,
                  const_cast<const uint8_t **>(src_buf), src_linesize,
                  format, width, height);
    av_free(buffer);
    return frame;
}

int main(int argc, char *argv[]) {
    uint32_t count = 1024;

    // fill array with frames
    std::vector<AVFrame *> list;
    for (uint64_t i = 0; i < count; ++i) {
        list.push_back(createFrame());
    }
    // allocated 1385 mb in heap

    // clear all allocated data
    for (auto i = list.begin(); i < list.end(); ++i) {
        if (*i != NULL) {
            av_frame_free(&(*i));
        }
    }
    list.clear();

    // memory-leak of > 360 Mb
}



But if I just create a frame and immediately free it, without saving it into the vector, there is no memory leak, even though the same number of frames is created.



What am I doing wrong?



UPDATE:



I was wrong. There is no memory leak here (checked with valgrind), but the freed memory is not immediately returned to the operating system, which is what confused me.
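For illustration, a minimal sketch of that allocator behaviour, assuming glibc on Linux (this is not part of the original post): freed heap memory is typically kept cached by the allocator for reuse, and malloc_trim() can be called to hand unused pages back to the kernel, which makes resident-memory numbers reported by tools like top drop again.

#include <malloc.h>   // malloc_trim() is glibc-specific
#include <cstdlib>
#include <cstring>
#include <vector>

int main() {
    std::vector<void *> blocks;
    for (int i = 0; i < 16384; ++i) {
        void *p = std::malloc(64 * 1024);   // 64 KiB chunks, below glibc's default mmap threshold
        std::memset(p, 1, 64 * 1024);       // touch the pages so they become resident
        blocks.push_back(p);
    }
    for (void *p : blocks)
        std::free(p);
    blocks.clear();
    // Resident memory may still look high here even though everything is freed.
    malloc_trim(0);   // ask glibc to return unused heap pages to the OS
    return 0;
}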