
Media (1)
-
Video d'abeille en portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (25)
-
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, Ogv and WebM (supported by HTML5), with MP4 also readable by Flash.
Audio files are encoded in MP3 and Ogg (supported by HTML5), with MP3 also readable by Flash.
Where possible, text is analyzed in order to retrieve the data needed for indexing by the search engine, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
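As a rough illustration of the kind of conversion described above (this is not MediaSPIP's actual pipeline; the file names and codec choices are assumptions), the same transcoding can be sketched with the ffmpeg-python wrapper that appears further down this page:

import ffmpeg  # ffmpeg-python wrapper around the ffmpeg command line

src = 'upload.avi'  # hypothetical uploaded file

# HTML5-friendly video outputs (WebM and Ogv)
ffmpeg.input(src).output('video.webm', vcodec='libvpx', acodec='libvorbis').run()
ffmpeg.input(src).output('video.ogv', vcodec='libtheora', acodec='libvorbis').run()
# MP4 output (readable by Flash players and most HTML5 browsers)
ffmpeg.input(src).output('video.mp4', vcodec='libx264', acodec='aac').run()
# Audio-only outputs (-vn drops the video stream)
ffmpeg.input(src).output('audio.ogg', acodec='libvorbis', vn=None).run()
ffmpeg.input(src).output('audio.mp3', acodec='libmp3lame', vn=None).run()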
-
Support for all media types
10 April 2011
Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04

If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above or send us the necessary fixes to add (...)
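As a side note, a machine's distribution can be checked against this table before running the installation script by reading /etc/os-release (present on most modern distributions; very old releases such as Squeeze may lack it). The mapping below only mirrors the table above and is a sketch, not part of MediaSPIP:

# Minimal sketch: compare the local distribution with the table above.
supported = {
    ('debian', '6'): 'Squeeze',
    ('debian', '7'): 'Wheezy',
    ('debian', '8'): 'Jessie',
    ('ubuntu', '12.04'): 'The Precise Pangolin',
    ('ubuntu', '14.04'): 'The Trusty Tahr',
}

info = {}
with open('/etc/os-release') as f:
    for line in f:
        if '=' in line:
            key, _, value = line.strip().partition('=')
            info[key] = value.strip('"')

release = (info.get('ID', ''), info.get('VERSION_ID', ''))
print('supported' if release in supported else 'not in the compatibility list above')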
On other sites (6475)
-
Sending per-frame metadata with H264-encoded frames
3 August 2021, by user2459280
We're looking for a way to send per-frame metadata (for example an ID) with H264-encoded frames from a server to a client.



We're currently developing a remote rendering application in which both the client and the server side are actively involved.
The server renders a high-quality image with all effects, lighting etc.
The client also has model information and renders a diffuse image that is used when the bandwidth is too low or when the images have to be warped in order to avoid stuttering.



So far we're encoding the frames on the server side with ffmpeg and streaming them with live555 to the client, which receives an RTSP stream and decodes the frames again using ffmpeg.



For our application, we now need to send per frame metadata.
We want the client to tell the server where the camera is right now. 
Ideally we'd be able to send the client's view matrix to the server, render the corresponding frame and send it back to the client together with its view matrix. So when the client receives a frame, we need to know exactly at what camera position the frame was rendered.



Alternatively we could also tag each view matrix with an ID, send it to the server, render the frame and tag it with the same ID and send it back. In this case we'd have to assign the right matrix to the frame again on the client side.
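To make that second variant concrete, the client-side bookkeeping could look roughly like the sketch below (the helper names and the transport call are placeholders for illustration, not an existing API):

pending = {}      # frame ID -> view matrix that was sent with that request
next_id = 0

def send_camera_update(view_matrix, send_to_server):
    """Tag the current view matrix with an ID and ship both to the server."""
    global next_id
    frame_id = next_id
    next_id += 1
    pending[frame_id] = view_matrix
    send_to_server(frame_id, view_matrix)    # placeholder transport call
    return frame_id

def on_frame_received(frame_id, decoded_frame, warp):
    """When a frame tagged with frame_id comes back, recover its matrix."""
    view_matrix = pending.pop(frame_id, None)
    if view_matrix is not None:
        warp(decoded_frame, view_matrix)     # placeholder client-side use
    # else: unknown or duplicate ID; drop the frame or reuse the latest matrix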



After several attempts to realize the above intent with ffmpeg, we came to the conclusion that ffmpeg does not provide the required functionality. ffmpeg only provides a fixed, predefined set of metadata fields that either cannot store a matrix or can only be set for every key frame, which is not frequent enough for our purpose.



Now we're considering using live555. So far we have an on-demand server which gets a VideoSubsession with a H264VideoStreamDiscreteFramer to contain our own FramedSource class. In this class we load the encoded AVPacket (from ffmpeg) and send its data buffer over the network. Now we need a way to send some kind of metadata with every frame to the client.
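One direction worth sketching here (a common technique, not something live555 or ffmpeg does for you) is to embed the metadata in the H.264 stream itself as a user_data_unregistered SEI NAL unit prepended to each access unit before the data buffer is handed on; decoders are required to ignore SEI payloads they do not recognize, so the stream stays playable. The UUID, the helper name and the Annex B start code below are our own choices, and whether the start code belongs there depends on how the discrete framer is fed:

def build_sei_nal(payload: bytes, uuid_bytes: bytes) -> bytes:
    """Build an H.264 SEI NAL unit (user_data_unregistered, payload type 5)."""
    assert len(uuid_bytes) == 16
    rbsp = bytearray([5])                     # SEI payload type 5
    size = 16 + len(payload)                  # UUID + metadata payload
    while size >= 255:                        # ff-byte size coding
        rbsp.append(255)
        size -= 255
    rbsp.append(size)
    rbsp += uuid_bytes + payload
    rbsp.append(0x80)                         # rbsp_trailing_bits
    ebsp = bytearray()                        # insert emulation-prevention bytes
    zeros = 0
    for b in rbsp:
        if zeros >= 2 and b <= 3:
            ebsp.append(3)
            zeros = 0
        ebsp.append(b)
        zeros = zeros + 1 if b == 0 else 0
    return b'\x00\x00\x00\x01\x06' + bytes(ebsp)   # Annex B start code + NAL header (type 6)

The client would then scan each access unit for NAL type 6 carrying that UUID, parse the ID or matrix out of it, and strip the unit before or after decoding.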



Do you have any ideas how to solve this metadata problem with live555 or another library?



Thanks for your help!


-
ffmpeg-python video conversion error: malloc of size 36254012 failed on Raspberry Pi
23 November 2021, by Bob Smith
I am trying to convert a .mkv file into .mp4 format using the ffmpeg-python library. I have been able to run my script on a Windows machine repeatedly without any issues; however, when I run the same script on a Raspberry Pi 4 B, I am faced with the same error again and again. Similar to this post, I am faced with the following message:


x264 [error]: malloc of size 36254012 failed
Video encoding failed



I have tried setting max_muxing_queue_size to 9999 as I heard this might fix the issue; it had no effect. I tried increasing the GPU RAM from the default 128 MB to as much as 512 MB (I have the 8 GB RAM model, so I was not concerned about overall system memory); needless to say, this had no effect either.

Finally, I read in a forum post from someone else with the same error that decreasing the number of threads ffmpeg uses to 1 might also solve this issue. This did in fact solve my problem, but unfortunately it slows the process to a crawl compared to what it would otherwise be. I was hoping someone might have another idea of how to fix this that would still allow ffmpeg to use multiple threads, or at least have some idea as to what might be causing this issue.


Also, it's not particularly useful, but for reference, the line in the code that is causing the exception is the following:


ffmpeg.input(video_file).output(out_name, **{'max_muxing_queue_size': '9999'}).run()
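A hedged variant of that call which tries to cut x264's memory use without dropping to a single thread is sketched below; the extra keys map to ffmpeg's -threads and -x264-params options, and the values are untested guesses rather than a confirmed fix:

(
    ffmpeg
    .input(video_file)
    .output(
        out_name,
        vcodec='libx264',
        **{
            'max_muxing_queue_size': '9999',   # what was already tried
            'threads': '2',                    # fewer threads than the default, but not 1
            'x264-params': 'rc-lookahead=20',  # smaller lookahead buffer, less encoder memory
        },
    )
    .run()
)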



-
libavfilter : Add derain filter
30 May 2019, by Xuewei Meng
libavfilter: Add derain filter
Remove the rain in the input image/video by applying the derain
methods based on convolutional neural networks. Training scripts
as well as scripts for model generation are provided in the
repository at https://github.com/XueweiMeng/derain_filter.git.
Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>
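For context, once this filter is built into ffmpeg it is applied like any other libavfilter video filter. The sketch below uses the ffmpeg-python wrapper mentioned earlier on this page, assumes the 'dnn_backend' and 'model' options introduced by the patch, and the file names are hypothetical (a model file would be produced by the scripts in the repository above):

import ffmpeg

(
    ffmpeg
    .input('rainy.mp4')
    .filter('derain', dnn_backend='native', model='derain.model')  # hypothetical model path
    .output('derained.mp4')
    .run()
)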