
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (10)
-
Support for all media types
10 April 2011. Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether of type: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other data (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
-
List of compatible distributions
26 April 2011. The table below lists the Linux distributions compatible with MediaSPIP's automated installation script:

Distribution  Version name          Version number
Debian        Squeeze               6.x.x
Debian        Wheezy                7.x.x
Debian        Jessie                8.x.x
Ubuntu        The Precise Pangolin  12.04 LTS
Ubuntu        The Trusty Tahr       14.04

If you want to help us improve this list, you can give us access to a machine running a distribution not mentioned above, or send us the fixes needed to add it (...)
On other sites (2257)
-
Writing multithreaded video and audio packets with FFmpeg
27 February 2017, by Robert Jones
I couldn't find any information on how av_interleaved_write_frame deals with video and audio packets.
I have multiple audio and video packets coming from 2 threads. Each thread calls write_video_frame or write_audio_frame, locks a mutex, initializes an AVPacket and writes the data to an .avi file. Initialization of the AVCodecContext and AVFormatContext is fine.
— Edit 1 —
Audio and video come from external sources (a microphone and a camera) and are captured as raw data without any compression (even for video).
I use h264 to encode the video and no compression (PCM) for the audio. The captured audio is 16-bit, 44100 Hz, stereo.
The captured video is 25 FPS.
Questions:
1) Is it a problem if I write multiple video packets at once (say 25 packets/sec) and just one audio packet/sec?
Answer: Apparently not; av_interleaved_write_frame should be able to manage that kind of data as long as pts and dts are managed properly.
This means I call av_interleaved_write_frame 25 times per second for video and just once per second for audio. Could this be a problem? If so, how can I deal with this scenario?
2) How can I manage pts and dts in this case? It seems to be a problem in my application, since I cannot render the resulting .avi file correctly. Can I use real timestamps for both video and audio?
Answer: The best thing to do here is to use the timestamp taken when capturing the audio/video as the pts and dts for this kind of application. So these are not exactly real timestamps (from a wall clock) but media capture timestamps.
Thank you for your valuable advice.
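The arithmetic behind that answer can be sketched as follows. This is a hypothetical helper, not part of the FFmpeg API: it rescales elapsed capture-clock time (assumed here to be in microseconds) into ticks of a stream time_base, which is the same rescaling av_rescale_q would perform from {1, 1000000} into the stream's time_base.

```javascript
// Sketch: deriving pts from capture timestamps (hypothetical helper, not FFmpeg API).
// Assumes a monotonic capture clock in microseconds and a stream time_base of num/den.
function captureTimeToPts(captureUs, firstCaptureUs, timeBase) {
  // Elapsed capture time, rescaled into time_base units; mirrors
  // av_rescale_q(elapsedUs, {1, 1000000}, timeBase).
  const elapsedUs = captureUs - firstCaptureUs;
  return Math.round((elapsedUs * timeBase.den) / (timeBase.num * 1e6));
}

// Video at 25 fps with a 1/90000 time_base: frames land 3600 ticks apart.
const tb = { num: 1, den: 90000 };
const t0 = 1000000; // first frame captured at t = 1 s on the capture clock
console.log(captureTimeToPts(t0, t0, tb));          // 0
console.log(captureTimeToPts(t0 + 40000, t0, tb));  // 3600 (one 25 fps frame later)
```

With raw capture and no B-frames, as in this scenario, dts can simply be set equal to pts on each packet before calling av_interleaved_write_frame.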
-
ffmpeg merge separate .webm audio and video files using pts_time possible ?
4 August 2021, by Zach
I have a number of audio and video files with different start and end times. They're all generated from piped input streams (node.js) in .webm format, so the audio files have gaps where there is no audio in the piped stream.


I'm trying to:

- Merge the audio files together with wall-clock-correct start/end times
- Merge the video files using hstack
- Combine the merged audio and merged video into one final video with all video/audio








Right now I'm still stuck on step 1: merging the audio.


My command that generates the separate audio files is:


'-protocol_whitelist',
 'pipe,udp,rtp',
 '-fflags',
 '+genpts',
 '-f',
 'sdp',
 '-use_wallclock_as_timestamps',
 'true',
 '-i',
 'pipe:0',
 '-copyts',
 '-map',
 '0:a:0',
 '-strict',
 '-2',
 '-c:a',
 'copy'



I'd love to combine them somehow using the timestamps of each packet, filling the empty space with silence. Right now, I'm offsetting them based on the time at which I launch the ffmpeg process from node.js, but these times are incorrect, as it takes a moment for ffmpeg to start up.


Any assistance or a push in the right direction for time-sensitive merging of audio/video .webm files with ffmpeg would be outstanding.


Thanks !


PS, here's what I'm currently doing and the problem I'm running into:


'-i',
 './recordings/audio_1.webm',
 '-i',
 './recordings/audio_2.webm',
 '-filter_complex',
 '[1]adelay=6384|6384[b];[0][b]amix=2',
 './recordings/merged_audio.webm'



The delays are inaccurate (because they're based on an estimate of when the first packet starts) and don't account for gaps in the audio files :(
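One way to make those delays exact, sketched below under the assumption that the node process records the wall-clock capture time of each file's first audio packet (or reads it back later, e.g. from the container's start time): derive each adelay value from those recorded timestamps instead of from the moment ffmpeg was launched. The helper name and the millisecond values are hypothetical.

```javascript
// Sketch: computing exact adelay values from recorded start timestamps
// instead of estimating when the ffmpeg process was spawned.
// startTimesMs: wall-clock capture time (ms) of each file's first audio packet.
function adelayArgs(startTimesMs) {
  const earliest = Math.min(...startTimesMs);
  return startTimesMs.map((t) => {
    const delay = Math.round(t - earliest); // ms of silence to prepend
    return `adelay=${delay}|${delay}`;      // one value per stereo channel
  });
}

console.log(adelayArgs([1628000000000, 1628000006384]));
// [ 'adelay=0|0', 'adelay=6384|6384' ]
```

The earliest-starting file gets a zero delay and every other file is padded relative to it, so the amix inputs line up on a common wall-clock origin; gaps inside a file would still need to be handled separately (e.g. by generating silence for the missing packet ranges).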


-
FFmpeg : replacing audio in live video stream
28 January 2020, by Mathijs
I'm using FFmpeg to encode and live-stream video captured through a DeckLink capture card. The video from the card comes with an audio stream, but I want to replace that audio stream with another. This other audio stream originates from the same source but is run through an audio processor that adds a fixed delay. The audio is fed back into the PC that runs FFmpeg through a virtual soundcard (audio over IP, but to Windows it looks like a sound card).
I know how to compensate for this fixed delay, but the issue is that audio and video drift slowly out of sync as the stream runs. I’m assuming this is due to the small difference in clock speeds between the virtual soundcard and the DeckLink card.
I've tried the vsync option and the aresample filter in FFmpeg in an attempt to keep audio and video in sync, but I haven't succeeded yet. Is there a way to make FFmpeg resample the audio and/or drop/duplicate frames so that both streams stay in sync?
Currently I’m running this command, which fails to stay in sync.
ffmpeg.exe -f dshow -i audio="WNIP Input 1 (Wheatstone Network Audio (WDM))" -itsoffset 2.3 -f decklink -thread_queue_size 128 -i "DeckLink SDI (3)" -filter_complex "[1:v:0]bwdif,format=yuv420p,setdar=16/9,scale=-1:576:flags=bicubic[vidout];[0:a:0]aresample=min_comp=0.02:comp_duration=15:max_soft_comp=0.005[audioout]" -c:v libx264 -preset slow -crf 25 -maxrate 1200k -bufsize 2400k -map "[vidout]:0" -map "[audioout]:0" -vsync 1 -r 50 -g 90 -keyint_min 90 -sc_threshold 0 -c:a libfdk_aac -b:a 192k -ac 2 -f flv "rtmp://somewhere"
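To get a feel for the magnitude of the clock-mismatch problem described above: a small frequency error between two independent capture clocks accumulates linearly with running time. A minimal sketch, where the ppm figure is an assumed typical crystal tolerance, not a measured value for either card:

```javascript
// Sketch: why two independent clocks drift apart over a live stream.
// A soundcard off by `ppmError` parts per million relative to the
// DeckLink clock accumulates drift linearly with elapsed time.
function driftMsAfter(seconds, ppmError) {
  return (seconds * 1000 * ppmError) / 1e6; // ms of accumulated drift
}

// Assuming a 50 ppm error (a common crystal tolerance), after one hour:
console.log(driftMsAfter(3600, 50)); // 180 ms — a clearly audible lip-sync error
```

This is why a one-time -itsoffset cannot fix the problem: the offset is correct only at the instant it is measured, and a compensating resampler (such as aresample's soft-compensation options used in the command above) has to keep absorbing the drift for the whole run.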