
Media (91)
-
DJ Z-trip - Victory Lap : The Obama Mix Pt. 2
15 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Matmos - Action at a Distance
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Danger Mouse & Jemini - What U Sittin’ On? (starring Cee Lo and Tha Alkaholiks)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Cornelius - Wataridori 2
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Rapture - Sister Saviour (Blackstrobe Remix)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (60)
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
Customisable form
21 June 2013
This page presents the fields available in the media publication form and lists the additional fields that can be added.
Media creation form
For a media-type document, the default fields are: text, enable/disable the forum (commenting can be disabled for each article), licence, add/remove authors, tags.
This form can be modified under:
Administration > Configuration des masques de formulaire. (...)
-
The farm’s regular Cron tasks
1 December 2010
Managing the farm relies on several repetitive tasks, known as Cron tasks, that run at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance of the shared hosting (mutualisation) on a regular basis. Combined with a system Cron on the central site of the shared hosting, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
On other sites (9437)
-
Logitech web camera and Linux
8 November 2019, by Nick Saw
I have a Logitech C310 camera with declared specifications of 720p at 30 fps.
When the camera is connected to Windows, recording fully matches the stated 720p 30 fps and the picture is clear.
The goal is to connect the same camera to an OrangePI board (running Armbian server) and save video files on it.
The camera appears as /dev/video0.
sudo ffmpeg -f v4l2 -s 1280x720 -i /dev/video0 output.wmv
As a result, I get a choppy picture at only 5 fps.
Am I using ffmpeg incorrectly? Any advice from someone with experience with web cameras on Linux would be appreciated (see the command sketched after the configuration dump below).
Thanks in advance.
USB camera configuration:
v4l2-ctl --all --device=/dev/video0
Driver Info (not using libv4l2):
Driver name : uvcvideo
Card type : UVC Camera (046d:081b)
Bus info : usb-1c1c000.usb-1
Driver version: 4.14.18
Capabilities : 0x84200001
Video Capture
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04200001
Video Capture
Streaming
Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
Width/Height : 1280/720
Pixel Format : 'YUYV'
Field : None
Bytes per Line : 2560
Size Image : 1843200
Colorspace : sRGB
Transfer Function : Default
YCbCr/HSV Encoding: Default
Quantization : Default
Flags :
Crop Capability Video Capture:
Bounds : Left 0, Top 0, Width 1280, Height 720
Default : Left 0, Top 0, Width 1280, Height 720
Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 1280, Height 720
Selection: crop_bounds, Left 0, Top 0, Width 1280, Height 720
Streaming Parameters Video Capture:
Capabilities : timeperframe
Frames per second: 5.000 (5/1)
Read buffers : 0
brightness (int) : min=0 max=255 step=1 default=128 value=128
contrast (int) : min=0 max=255 step=1 default=32 value=32
saturation (int) : min=0 max=255 step=1 default=32 value=32
white_balance_temperature_auto (bool) : default=1 value=1
gain (int) : min=0 max=255 step=1 default=64 value=192
power_line_frequency (menu) : min=0 max=2 default=2 value=2
white_balance_temperature (int) : min=0 max=10000 step=10 default=4000 value=4610 flags=inactive
sharpness (int) : min=0 max=255 step=1 default=24 value=24
backlight_compensation (int) : min=0 max=1 step=1 default=0 value=0
exposure_auto (menu) : min=0 max=3 default=3 value=3
exposure_absolute (int) : min=1 max=10000 step=1 default=166 value=249 flags=inactive
exposure_auto_priority (bool) : default=0 value=1
led1_mode (menu) : min=0 max=3 default=3 value=3
led1_frequency (int) : min=0 max=131 step=1 default=0 value=0
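A likely explanation: the configuration dump shows the camera negotiating raw YUYV at 1280x720, a mode that USB 2.0 bandwidth typically limits to around 5-7.5 fps; many UVC cameras only reach 30 fps at 720p when delivering MJPEG. A hedged command to try, assuming the C310 also exposes an MJPEG format (v4l2-ctl --list-formats-ext --device=/dev/video0 will show which formats and frame rates are offered):
ffmpeg -f v4l2 -input_format mjpeg -video_size 1280x720 -framerate 30 -i /dev/video0 -c:v libx264 output.mp4
-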
How to encode a video from several images generated in a C++ program without writing the separate frame images to disk?
5 May 2021, by ksb496
I am writing a C++ program in which a sequence of N different frames is generated after performing some operations implemented in the code. After each frame is completed, I write it to disk as IMG_%d.png, and finally I encode all of the images into a video with ffmpeg using the x264 codec.



The summarized pseudocode of the main part of the program is the following one :



std::vector<int> B(width*height*3);
for (i=0; i<N; i++)
{
    generateframe(B, i); // void generateframe(std::vector<int> &, int); returns a different image for each i
    sprintf(s, "IMG_%d.png", i+1);
    WriteToDisk(B, s);   // void WriteToDisk(std::vector<int>, char[])
}
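The final encoding step is then a separate ffmpeg invocation over the numbered images; a hedged example of such a command (the frame rate and quality settings here are assumptions, not taken from the question):
ffmpeg -framerate 25 -i IMG_%d.png -c:v libx264 -crf 20 Video.mp4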



The problem with this implementation is that the number of desired frames, N, is usually high (on the order of 100000), as is the resolution of the pictures (1920x1080), so the disk is put under heavy load, with dozens of GB written on each execution.



To avoid this, I have been trying to find documentation on passing each image stored in the vector B directly to an encoder such as x264, without writing the intermediate image files to disk. Although I found some interesting topics, none of them solved exactly what I want: many concern running the encoder on image files that already exist on disk, while others provide solutions for other programming languages such as Python (here you can find a fully satisfactory solution for that platform).



The pseudocode of what I would like to obtain is something like this:



std::vector<int> B(width*height*3);
video_file=open_video("Generated_Video.mp4", ...[encoder options]...);
for (i=0; i<N; i++)
{
    generateframe(B, i);
    add_frame_to_video(video_file, B); // pass the raw image data straight to the encoder
}
close_video(video_file);



According to what I have read on related topics, the x264 C++ API might be able to do this, but, as stated above, I did not find a satisfactory answer to my specific question. I tried learning and using the ffmpeg source code directly, but its steep learning curve and compilation issues forced me to discard that option: I am only a hobbyist programmer and unfortunately cannot spend that much time learning something so demanding.



Another possible solution that came to my mind is to call the ffmpeg binary from the C++ code and somehow transfer the image data of each iteration (stored in B) to the encoder, appending one frame at a time without "closing" the video file, so that frames keep being added until the N-th one, at which point the file is closed. In other words, call ffmpeg from the C++ program to write the first frame to a video, but make the encoder "wait" for more frames, then feed it the second frame, and so on until the last frame, when the video is finalized. However, I do not know how to proceed, or whether this is actually possible.
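For illustration, a minimal sketch of this feed-frames-to-ffmpeg idea, using a POSIX popen() pipe to ffmpeg's standard input; the resolution, frame rate, pixel format and encoder options below are assumptions, not the author's code:

#include <cstdio>
#include <cstdint>
#include <vector>
#include <string>

int main() {
    const int width = 1920, height = 1080, N = 100; // assumed values
    std::vector<uint8_t> B(width * height * 3);     // one raw rgb24 frame

    // ffmpeg reads raw frames from its standard input ("-i -") and encodes them with x264.
    std::string cmd = "ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 1920x1080 -r 25 -i - "
                      "-c:v libx264 -preset slow -crf 20 -pix_fmt yuv420p Generated_Video.mp4";
    FILE *pipe = popen(cmd.c_str(), "w"); // "wb" may be needed on Windows-like platforms
    if (!pipe) return 1;

    for (int i = 0; i < N; i++) {
        // generateframe(B, i);              // fill B with the i-th image (author's function, omitted here)
        fwrite(B.data(), 1, B.size(), pipe); // hand one complete frame to the encoder
    }
    pclose(pipe); // closing ffmpeg's stdin lets it finish and close the video file
    return 0;
}

The key point is that ffmpeg consumes the raw bytes as they are produced, so no intermediate PNG files are ever written.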



Edit 1:



As suggested in the replies, I have been reading about named pipes and tried to use them in my code. First of all, it should be noted that I am working with Cygwin, so my named pipes are created as they would be under Linux. The modified pseudocode I used (including the corresponding system libraries) is the following:



FILE *fd;
mkfifo("myfifo", 0666);

for (i=0; i<N; i++)
{
    fd=fopen("myfifo", "wb");
    generateframe(B, i);
    WriteToPipe(B, fd); // void WriteToPipe(std::vector<int>, FILE *&fd)
    fflush(fd);
    fclose(fd);
}
unlink("myfifo");



WriteToPipe is a slight modification of the previous WriteToDisk function, in which I made sure that the write buffer used to send the image data is small enough to fit within the pipe's buffering limits.



I then compile the program and run the following command in the Cygwin terminal:



./myprogram | ffmpeg -i pipe:myfifo -c:v libx264 -preset slow -crf 20 Video.mp4




However, the program remains stuck in the loop at i=0, on the first fopen call. If ffmpeg had not been launched this would be expected, since the writer (my program) would block waiting for a reader to open the other end of the pipe; but that is not the case here. It looks like the two processes are somehow not connecting through the pipe, and I have not been able to find further documentation to overcome this issue. Any suggestions?
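For comparison, the usual way such a fifo is consumed is to start the program and ffmpeg separately (no shell pipe in front of ffmpeg), open the fifo once in the writer rather than on every iteration, and let ffmpeg read it as raw video; a hedged example command, assuming rgb24 frames at 1920x1080 (the frame rate and pixel format are assumptions):
ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 1920x1080 -r 25 -i myfifo -c:v libx264 -preset slow -crf 20 Video.mp4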

