
Advanced search
Other articles (45)
-
Publishing on MediaSPIP
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out. -
Encoding and processing into web-friendly formats
13 April 2011 — MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed to extract the data needed for search-engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
Automated installation script for MediaSPIP
25 April 2011 — To overcome the difficulties caused mainly by the installation of server-side software dependencies, an "all-in-one" installation script written in Bash was created to facilitate this step on a server running a compatible Linux distribution.
To use it, you must have SSH access to your server and a root account, which the script uses to install the dependencies. Contact your hosting provider if you do not have these.
Documentation on using this installation script is available here.
The code of this (...)
On other sites (4786)
-
Receiving multiple files from ffmpeg via subprocess.PIPE
19 August 2014, by Per Plexi — I am using ffmpeg to convert a video into images. These images are then processed by my Python program. Originally I used ffmpeg to first save the images to disk, then read them one by one with Python.
This works fine, but in an effort to speed up the program I am trying to skip the storage step and only work with the images in memory.
I use the following ffmpeg and Python subprocess commands to pipe the output from ffmpeg to Python:

import io
import subprocess
from PIL import Image

command = ["ffmpeg.exe", "-i", "ADD\\sg1-original.mp4", "-r", "1", "-f", "image2pipe", "pipe:1"]
pipe = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
image = Image.open(io.BytesIO(pipe.communicate()[0]))

The image variable can then be used by my program. The problem is that if I send more than one image from ffmpeg, all of the data ends up in this variable. I need a way to separate the images. The only way I can think of is splitting on the JPEG end-of-image marker (0xff, 0xd9). This works, but is unreliable.
What have I missed regarding piping files with subprocess? Is there a way to read only one file at a time from the pipe?
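One approach (a sketch, not from the original thread): ask ffmpeg for fixed-size raw frames instead of JPEGs, so every frame has a known byte length and the stream can be split without scanning for markers. The helper below contains only the splitting logic; the ffmpeg command line and frame dimensions in the usage comment are assumptions for illustration.

```python
import io

def read_frames(stream, frame_size):
    """Yield successive frames of exactly frame_size bytes from a byte stream.

    Suits any fixed-size frame format, e.g. ffmpeg's "-f rawvideo
    -pix_fmt rgb24" output, where each frame is width * height * 3 bytes.
    """
    while True:
        data = stream.read(frame_size)
        if len(data) < frame_size:
            return  # end of stream (any shorter tail is a truncated frame)
        yield data

# Hypothetical usage with ffmpeg (file name and dimensions are assumptions):
# import subprocess
# from PIL import Image
# width, height = 640, 480
# cmd = ["ffmpeg", "-i", "ADD\\sg1-original.mp4", "-r", "1",
#        "-f", "rawvideo", "-pix_fmt", "rgb24", "pipe:1"]
# pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE)
# for raw in read_frames(pipe.stdout, width * height * 3):
#     image = Image.frombytes("RGB", (width, height), raw)
```

Because every raw frame is the same size, no marker scanning is needed, and reading from `pipe.stdout` incrementally avoids buffering the whole video in memory the way `communicate()` does.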
-
How can I combine multiple bitmap frames using the ffmpeg library in C++?
20 February 2021, by Evan Pacini — OK, so I've been making a program in C++ that takes a 24-bit (or less) bitmap and, by checking its luminance, converts it into a 1-bit bitmap, which works totally fine... However, I've also been using ffmpeg in my C++ code (through system()) to compile multiple frames with different luminances into a video.


// Compile the frames into a video by invoking the ffmpeg executable
// (std::string avoids the fixed-size buffer a strcat chain could overflow)
std::string cmd = std::string("ffmpeg -y -framerate ") + framerate
                + " -i " + outFileName
                + "/frm%d.bmp -pix_fmt yuv420p -vf \"pad=ceil(iw/2)*2:ceil(ih/2)*2\" "
                + outFileName + ".mp4";
std::system(cmd.c_str());



This works fine; however, as bitmaps are uncompressed and the number of frames can get quite large, I've been trying to improve performance so the program takes less time to run. This is why I want to use the ffmpeg libraries instead of the executable, which will also make the program more portable and, hopefully, faster.


However, I find the documentation of the libraries quite intricate and the examples quite limited, as most of them explain how to extract frames from a video, while I intend to combine frames into a video...


So do any of you know how to do that?
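For reference (not from the original thread): the libav* route is covered by the examples shipped in the FFmpeg source tree (doc/examples/encode_video.c and muxing.c), but before taking that on, one way to cut the disk traffic while keeping the ffmpeg executable is to stream the BMP data to ffmpeg's stdin through a pipe, so frames never touch the disk. A minimal POSIX sketch (on Windows, popen/pclose become _popen/_pclose); the ffmpeg command in the usage comment is an assumption, not the asker's exact invocation:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Feed an in-memory buffer to a command's standard input via a pipe.
// Returns the command's exit status as reported by pclose (-1 on failure).
int pipe_to_command(const std::string& cmd, const std::vector<unsigned char>& data) {
    FILE* p = popen(cmd.c_str(), "w");  // open cmd with a writable pipe to its stdin
    if (!p) return -1;
    if (!data.empty())
        fwrite(data.data(), 1, data.size(), p);
    return pclose(p);  // flushes, closes the pipe, and waits for the command
}

// Hypothetical usage: concatenate complete BMP files into one buffer and
// let ffmpeg demux them from the pipe (flags are illustrative):
//   pipe_to_command("ffmpeg -y -f image2pipe -framerate 25 -i - out.mp4",
//                   all_frames_as_bmp_bytes);
```

This keeps the external-process design but removes the write-then-read round trip through the filesystem; moving fully to libavcodec/libavformat would additionally save the process-spawn and BMP-serialization overhead.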