
Other articles (111)
-
General document management
13 May 2011
MediaSPIP never modifies the original document uploaded to the site.
For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original available for download in case it cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
The tables below explain what MediaSPIP can do (...) -
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
User profiles
12 April 2011
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modify your profile" link in the navigation is (...)
On other sites (9425)
-
Transcoding WAV audio to AAC in an MP4 container using FFmpeg C API
7 November 2022, by vstrom coder
I'm trying to compose source video and audio into a final MP4 video.
I have a problem with the WAV audio. After decoding and filtering, I'm getting an error from the output encoder:
[aac @ 0x145e04c40] more samples than frame size


I initially used the following filter graph (minimal reproducible example):


abuffer -> aformat -> abuffersink



At this point I was getting the error mentioned above.


Then, I tried to insert an aresample filter into the graph:

abuffer -> aresample -> aformat -> abuffersink



But I was still getting the same error.
Trying aresample was based on the fact that the ffmpeg CLI auto-inserts this filter when converting WAV to MP4:


Command:


ffmpeg -i source.wav output.mp4 -loglevel debug



Output contains:


[graph_0_in_0_0 @ 0x138f06200] Setting 'time_base' to value '1/44100'
 [graph_0_in_0_0 @ 0x138f06200] Setting 'sample_rate' to value '44100'
 [graph_0_in_0_0 @ 0x138f06200] Setting 'sample_fmt' to value 's16'
 [graph_0_in_0_0 @ 0x138f06200] Setting 'channel_layout' to value 'mono'
 [graph_0_in_0_0 @ 0x138f06200] tb:1/44100 samplefmt:s16 samplerate:44100 chlayout:mono
 [format_out_0_0 @ 0x138f06620] Setting 'sample_fmts' to value 'fltp'
 [format_out_0_0 @ 0x138f06620] Setting 'sample_rates' to value '96000|88200|64000|48000|44100|32000|24000|22050|16000|12000|11025|8000|7350'
 [format_out_0_0 @ 0x138f06620] auto-inserting filter 'auto_aresample_0' between the filter 'Parsed_anull_0' and the filter 'format_out_0_0'
 [AVFilterGraph @ 0x138f060f0] query_formats: 4 queried, 6 merged, 3 already done, 0 delayed
 [auto_aresample_0 @ 0x138f06c30] [SWR @ 0x120098000] Using s16p internally between filters
 [auto_aresample_0 @ 0x138f06c30] ch:1 chl:mono fmt:s16 r:44100Hz -> ch:1 chl:mono fmt:fltp r:44100Hz
 Output #0, mp4, to 'output.mp4':
 Metadata:
 encoder : Lavf59.27.100
 Stream #0:0, 0, 1/44100: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, delay 1024, 69 kb/s
 Metadata:
 encoder : Lavc59.37.100 aac



I'm trying to figure out whether I should use the SWR library directly as exemplified in the transcode_aac example.
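For what it's worth, "more samples than frame size" from the aac encoder usually means the frames coming out of the buffersink carry more samples than the encoder's fixed frame_size (1024 for AAC). One common fix with the C API is to cap the sink's output frame size once the graph is configured and the encoder is opened; a minimal sketch, assuming a configured buffersink context and an opened encoder context (the names buffersink_ctx and enc_ctx are placeholders, not taken from the question):

#include <libavcodec/avcodec.h>
#include <libavfilter/buffersink.h>

/* Ask the sink to hand out frames of exactly enc_ctx->frame_size samples,
 * so each filtered frame can be passed straight to avcodec_send_frame(). */
static void cap_sink_frame_size(AVFilterContext *buffersink_ctx,
                                const AVCodecContext *enc_ctx)
{
    if (!(enc_ctx->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE))
        av_buffersink_set_frame_size(buffersink_ctx, enc_ctx->frame_size);
}

The alternative is to keep the graph as-is and buffer samples yourself in an AVAudioFifo, pulling exactly frame_size samples per encoder call, which is what the transcode_aac example does.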


-
Efficient real-time video stream processing and forwarding with RTMP servers
19 May 2023, by dumbQuestions
I have a scenario where I need to retrieve a video stream from an RTMP server, apply image processing (specifically, adding blur to frames), and then forward the processed stream to another RTMP server (in this case, Twitch).


Currently, I'm using ffmpeg in conjunction with cv2 to retrieve and process the stream. However, this approach introduces significant lag when applying the blur. I'm seeking an alternative method that can achieve the desired result more efficiently. I did attempt to solely rely on ffmpeg for the entire process, but I couldn't find a way to selectively process frames based on a given condition and subsequently transmit only those processed frames.


Is there a more efficient approach or alternative solution that can address this issue and enable real-time video stream processing with minimal lag?


Thanks in advance!


import subprocess

import cv2


def forward_stream(server_url, stream_key, twitch_stream_key):
    get_ffmpeg_command = [...]

    send_ffmpeg_command = [...]

    # Start the FFmpeg process that pulls the source stream
    read_process = subprocess.Popen(get_ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Start the FFmpeg process that pushes the processed stream to Twitch
    send_process = subprocess.Popen(send_ffmpeg_command, stdin=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Open video capture on the source RTMP URL
    cap = cv2.VideoCapture(f'{server_url}')

    while True:
        # Read the next frame
        ret, frame = cap.read()
        if ret:
            # Decide whether this frame needs blurring
            # (machine_learning_algorithm is defined elsewhere)
            should_blur = machine_learning_algorithm(frame)

            # Apply blur if necessary
            if should_blur:
                frame = cv2.blur(frame, (25, 25))

            # Write the raw frame to the sending FFmpeg process
            send_process.stdin.write(frame.tobytes())
        else:
            break

    # Release resources
    cap.release()
    send_process.stdin.close()
    send_process.wait()
    read_process.wait()
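For reference, the sending side of a pipeline like this typically has FFmpeg read raw frames from stdin and push them to the RTMP ingest. A hypothetical sketch of the kind of command send_ffmpeg_command could expand to (resolution, frame rate, pixel format and the Twitch ingest URL are illustrative assumptions, not values from the question):

ffmpeg -f rawvideo -pix_fmt bgr24 -s 1280x720 -r 30 -i - \
 -c:v libx264 -preset veryfast -tune zerolatency \
 -f flv rtmp://live.twitch.tv/app/TWITCH_STREAM_KEY

Note that stdin carries video only in this arrangement; audio would need a separate input, and the decode-to-raw/re-encode round trip itself adds latency on top of the per-frame processing.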




-
Documented ffmpeg commands not recognized by ffmpeg
2 April 2020, by agconti
I'm trying to use options like ldash and http_opts, as the dash muxer docs describe, but FFmpeg doesn't recognize them. I'm on the latest released version of ffmpeg, v4.2.2. I see the changes in the ffmpeg master branch but not in the v4.2 release branch. Does ffmpeg not recognize them because they haven't been released yet?


Here are the dash muxer docs for reference: https://ffmpeg.org/ffmpeg-all.html#dash-2



Here's a minimal example command with uncut output:



Andrews-MacBook-Pro :: dev/test ‹master› » ffmpeg -re -i test.mp4 \ 
-map 0 -map 0 -c:a libfdk_aac -c:v libx264 \
-b:v:0 800k -b:v:1 300k -s:v:1 320x170 -profile:v:1 baseline \
-profile:v:0 main -bf 1 \
-b_strategy 0 -ar:a:1 22050 \
-adaptation_sets "id=0,streams=v id=1,streams=a" \
-ldash 1 \
-f dash ./output/out.mpd

ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
 built with Apple clang version 11.0.0 (clang-1100.0.33.17)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.2_2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
Unrecognized option 'ldash'.
Error splitting the argument list: Option not found
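One way to check whether the binary you are running actually exposes these options is to list the dash muxer's private options directly; anything documented for master but missing from that list has simply not been released in the build you have installed:

ffmpeg -h muxer=dash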