
Media (2)
-
Core Media Video
4 April 2013
Updated: June 2013
Language: French
Type: Video
-
Portrait video of a bee
14 May 2011
Updated: February 2012
Language: French
Type: Video
Other articles (104)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, MediaSPIP init automatically applies a preconfiguration so that the new feature is operational right away. There is therefore no need to go through a configuration step for this.
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add yours using the form at the bottom of the page.
On other sites (11309)
-
Audio Slowly Desynchronizing When Segmenting
14 April 2018, by Nimble
I use ffmpeg’s ability to segment video while I record, so I can record continuously without my hard drive filling up.
It works really well, except the audio desynchronizes from the video when the file segments. The video seems to be uninterrupted, but I can actually hear a tiny jump in the audio when I join the segments later on. One would think ffmpeg would store packets in a queue during segmentation so nothing is lost, but that doesn’t seem to be the case... Is there any way I could force it to do something like that?
Here is my current command:
ffmpeg -y -thread_queue_size 5096 -f dshow -video_size 3440x1440 -rtbufsize 2147.48M -framerate 100 -pixel_format nv12 ^
-itsoffset 00:00:00.012 -i video="Video (00 Pro Capture HDMI 4K+)" -thread_queue_size 5096 -guess_layout_max 0 -f dshow ^
-rtbufsize 2147.48M -i audio="SPDIF/ADAT (1+2) (RME Fireface UC)" -map 0:0,1:0 -map 1:0 -c:v h264_nvenc -preset: llhp ^
-pix_fmt nv12 -b:v 250M -minrate 250M -maxrate 250M -bufsize 250M -b:a 384k -ac 2 -r 100 -vsync 1 ^
-max_muxing_queue_size 5096 -segment_time 600 -segment_wrap 9 -f segment C:\Users\djcim\Videos\PC\PC\PC%02d.mp4
I am delaying the video stream because right out of the gate it’s a little bit ahead of the audio.
PS: aresample or async seem to have no effect, or at least not a desirable one.
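One pattern sometimes used to avoid glitches at segment boundaries is to encode once to a continuous MPEG-TS stream and let a second ffmpeg process copy-split that stream, so the segment muxer only cuts finished packets and never touches the encoder or the capture timestamps. The sketch below adapts the command above along those lines; the explicit -c:a aac and the aac_adtstoasc bitstream filter are assumptions added here (the original command leaves the audio codec implicit), and the whole pipeline is a sketch rather than something verified against this capture setup.

REM Hedged sketch, untested here: encode once, pipe a continuous MPEG-TS stream,
REM and copy-split it into 600 s MP4 segments without re-timing any packets.
REM Assumption: audio is explicitly AAC (-c:a aac); aac_adtstoasc repackages it for MP4.
ffmpeg -y -thread_queue_size 5096 -f dshow -video_size 3440x1440 -rtbufsize 2147.48M -framerate 100 -pixel_format nv12 ^
-itsoffset 00:00:00.012 -i video="Video (00 Pro Capture HDMI 4K+)" -thread_queue_size 5096 -guess_layout_max 0 -f dshow ^
-rtbufsize 2147.48M -i audio="SPDIF/ADAT (1+2) (RME Fireface UC)" -map 0:0,1:0 -map 1:0 -c:v h264_nvenc -preset llhp ^
-pix_fmt nv12 -b:v 250M -minrate 250M -maxrate 250M -bufsize 250M -c:a aac -b:a 384k -ac 2 -r 100 -vsync 1 ^
-f mpegts - | ffmpeg -i - -map 0 -c copy -bsf:a aac_adtstoasc -f segment -segment_time 600 -segment_wrap 9 ^
-reset_timestamps 1 C:\Users\djcim\Videos\PC\PC\PC%02d.mp4

Whether this removes the audible jump depends on where the samples are lost; if the gap comes from the capture side rather than the muxer, copy-splitting will not change it.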
-
kmsgrab: Use GetFB2 if available
5 July 2020, by Mark Thompson
kmsgrab: Use GetFB2 if available
The most useful feature here is the ability to automatically extract the
framebuffer format and modifiers. It also makes support for multi-plane
framebuffers possible, though none are added to the format table in this
patch.

This requires libdrm 2.4.101 (from April 2020) to build, so it includes a
configure check to allow compatibility with existing distributions. Even
with libdrm support, it still won’t do anything at runtime if you are
running Linux < 5.7 (before June 2020).
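For reference, a minimal kmsgrab invocation in the style of the ffmpeg documentation is sketched below; on a system with libdrm >= 2.4.101 and Linux >= 5.7, the GetFB2 path added by this patch lets kmsgrab detect the framebuffer format itself instead of relying on the -format option. The device path, frame rate, and the bgr0 assumption in the filter chain are illustrative, not taken from the patch.

# Hedged sketch: grab the current framebuffer, download it to system memory and
# encode it. Requires CAP_SYS_ADMIN; device path and bgr0 format are assumptions.
ffmpeg -device /dev/dri/card0 -framerate 60 -f kmsgrab -i - -vf "hwdownload,format=bgr0" output.mp4
-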
Using ffmpeg on RTOS
7 April 2015, by Dhiraj
I am trying to capture video and audio from a webcam and stream it wirelessly through a software defined radio. Essentially, I need to packetize the video stream so that it is suitable for the transport layer implemented in INTEGRITY OS running on an ARM processor. While I am able to capture the video and transmit it wirelessly, on the receiving end, when I try to view the video using ffplay, the quality is very bad: ugly green patches and video tearing. Do pardon my ignorance, but ffmpeg is not my forte. This is how I am sending the video:
ffmpeg -rtbufsize 1500M -f dshow -i video="Vimicro USB Camera (Altair)":audio="Microphone (Realtek High Definition Audio)" -r 10 -vcodec libx264 -threads 0 -crf 23 -preset ultrafast -tune zerolatency -acodec libmp3lame -b 600k -flush_packets 0 -f mpegts udp://192.9.200.254:8000?pkt_size=1128
On the receiver end, I run ffplay with the following command:
ffplay udp://192.9.200.69:8000
Importantly, the video from the USB camera is sent over Ethernet to an ARM processor running INTEGRITY RTOS. The transport layer of the Software Defined Radio is implemented in the RTOS. This is where the video data is multiplexed with other application data being transmitted through the SDR, hence the stringent requirement on the packet size (1128 bytes). From the ARM processor, the data packet is sent on to a DSP, where the network and DLL layers are implemented, and finally on to an FPGA, where the PHY layer is implemented.
Apart from ffplay, I have also tried mplayer; however, the video output is equally bad.
Any help would be greatly appreciated.
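Green patches and tearing over plain UDP are usually a sign of lost or truncated datagrams on the receive path rather than an encoder problem. One low-effort thing to try, sketched below, is giving ffplay a much larger UDP receive buffer through the URL options of ffmpeg’s udp protocol; the sizes are illustrative assumptions and will not help if packets are being dropped inside the SDR link itself.

# Hedged sketch: same receiver address as above, with an enlarged UDP FIFO,
# a bigger socket buffer, and non-fatal handling of circular-buffer overruns.
ffplay "udp://192.9.200.69:8000?fifo_size=100000&overrun_nonfatal=1&buffer_size=4194304"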