
Media (1)
-
Spitfire Parade - Crisis
15 May 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (80)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects or individuals, rapid deployment of multiple unique sites, and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
User profiles
12 April 2011. Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
On other sites (10242)
-
How to apply 2 filters drawtext and drawbox using FFmpeg
9 July 2017, by user6972
I'm having problems combining filters. I'm trying to take video from the camera, apply a timer on it and also overlay a box in the center. I can put a time code (local time and pts) on it using the -vf drawtext command with no problems:
ffmpeg -f video4linux2 -input_format mjpeg -s 1280x720 -i /dev/video0 \
-vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: \
text='%{localtime} %{pts\:hms}': fontcolor=white: fontsize=24: box=1: \
boxcolor=black@0.8: boxborderw=5: x=0: y=0" -vcodec libx264 \
-preset ultrafast -f mp4 -pix_fmt yuv420p -y output.mp4
Then I have one that draws a small box using drawbox:
ffmpeg -f video4linux2 -input_format mjpeg -s 1280x720 -i /dev/video0 \
-filter_complex " drawbox=x=iw/2:y=0:w=10:h=ih:color=red@0.1": \
-vcodec libx264 -preset ultrafast -f mp4 -pix_fmt yuv420p -y output.mp4
I assumed I could combine these with the filter_complex switch and separate them using the semicolon, like this:
ffmpeg -f video4linux2 -input_format mjpeg -s 1280x720 -i /dev/video0 -filter_complex "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='%{localtime} %{pts\:hms}': fontcolor=white: fontsize=24: box=1: boxcolor=black@0.8;drawbox=x=iw/2:y=0:w=10:h=ih:color=red@0.1": -vcodec libx264 -preset ultrafast -f mp4 -pix_fmt yuv420p -y output.mp4
But it fails to find the input stream on the second filter:
Input #0, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 10651.720690, bitrate: N/A
Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1280x720, -5 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
Cannot find a matching stream for unlabeled input pad 0 on filter Parsed_drawbox_1
I tried to direct it to [0] like this:
ffmpeg -f video4linux2 -input_format mjpeg -s 1280x720 -i /dev/video0 -filter_complex " \
drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: \
text='%{localtime} %{pts\:hms}': fontcolor=white: fontsize=24: box=1: \
boxcolor=black@0.8;[0] drawbox=x=iw/2:y=0:w=10:h=ih:color=red@0.1": \
-vcodec libx264 -preset ultrafast -f mp4 -pix_fmt yuv420p -y output.mp4
But it fails to put the box on the output.
So I tried to split the streams like this:
ffmpeg -f video4linux2 -input_format mjpeg -s 1280x720 -i /dev/video0 -filter_complex " \
split [main][tmp];\
[main] drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: \
text='%{localtime} %{pts\:hms}': fontcolor=white: fontsize=24: box=1: boxcolor=black@0.8 [tmp];\
[main] drawbox=x=iw/2:y=0:w=10:h=ih:color=red@0.1 [tmp2]; [tmp][tmp2] overlay": \
-vcodec libx264 -preset ultrafast -f mp4 -pix_fmt yuv420p -y output.mp4
But my build doesn't have the overlay filter compiled in. At this point I decided to stop and ask if I'm making this harder than it should be. The end result is I just want a timer and a box drawn on the video. Is there a better way or a formatting trick to do this?
Thanks
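For reference, filters that act on the same stream can be chained with a comma inside a single filtergraph; a semicolon starts a new chain, which is why the second filter ended up with an unlabeled input pad. A minimal, untested sketch along those lines, reusing the same v4l2 input and options as above:
ffmpeg -f video4linux2 -input_format mjpeg -s 1280x720 -i /dev/video0 \
 -vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: \
 text='%{localtime} %{pts\:hms}': fontcolor=white: fontsize=24: box=1: \
 boxcolor=black@0.8: boxborderw=5: x=0: y=0, \
 drawbox=x=iw/2:y=0:w=10:h=ih:color=red@0.1" \
 -vcodec libx264 -preset ultrafast -f mp4 -pix_fmt yuv420p -y output.mp4
The comma makes drawbox consume the output of drawtext directly, so no split or overlay filter is needed.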
-
avcodec/libvpxenc: Apply codec options to alpha codec context
3 September 2021, by Adam Chelminski
avcodec/libvpxenc: Apply codec options to alpha codec context
When encoding yuva420 (alpha) frames, the vpx encoder uses a second
vpx_codec_ctx to encode the alpha stream. However, codec options were
only being applied to the primary encoder. This patch updates
codecctl_int and codecctl_intp to also apply codec options to the alpha
codec context when encoding frames with alpha.
This is necessary to take advantage of libvpx speed optimizations
such as 'row-mt' and 'cpu-used' when encoding videos with alpha.
Without this patch, the speed optimizations are only applied to the
primary stream encoding, and the overall encoding is just as slow
as it would be without the options specified.
Signed-off-by: Adam Chelminski <chelminski.adam@gmail.com>
Signed-off-by: James Zern <jzern@google.com>
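For context, the options mentioned are ordinary libvpx-vp9 encoder options on the FFmpeg command line; a hypothetical invocation exercising them on alpha content (file names are placeholders) could look like:
ffmpeg -i input_with_alpha.mov -c:v libvpx-vp9 -pix_fmt yuva420p -row-mt 1 -cpu-used 4 -b:v 2M output.webm
With this patch, row-mt and cpu-used are applied to the alpha-plane encoder as well, rather than only to the primary stream.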
-
How to apply dynamic watermarking for users watching video in real-time? [closed]
3 January, by Barun Bhattacharjee
I am working on a video streaming project where I need to apply dynamic watermarking (e.g., username and email) in real time for security purposes. The video is streamed in DASH format, and the segment files are .m4s files generated via FFmpeg.


Challenges:
Is it possible to directly apply dynamic watermarking to .m4s segment files?


Video segments are generated using FFmpeg with the following command:


import ffmpeg  # ffmpeg-python

# video_path and mpd_path are defined elsewhere in the project
(
    ffmpeg
    .input(video_path)
    .output(mpd_path,
            format='dash',
            map='0',
            video_bitrate='2400k',
            video_size='1920x1080',
            vcodec='libx264',
            seg_duration='4',  # Sets segment duration to 4 seconds
            acodec='copy')
    .run()
)
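For reference, the ffmpeg-python call above corresponds roughly to a command line like the following (input and output paths are placeholders, and the resize is expressed here with -s):
ffmpeg -i input.mp4 -f dash -map 0 -b:v 2400k -s 1920x1080 -vcodec libx264 -seg_duration 4 -acodec copy output.mpd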




What I tried:
I attempted to use FFmpeg to apply a watermark dynamically to the .m4s files using the drawtext filter, but .m4s files are not always recognized as valid input for FFmpeg operations.


# FFmpeg command to add watermark to m4s file
try:
    # FFmpeg processing
    out, err = (
        ffmpeg
        .input(m4s_file_path)  # Input the segment file
        .filter(
            "drawtext",
            text=user_info,
            fontfile="font/dejavu-sans/DejaVuSans-Bold.ttf",
            fontsize=24,
            fontcolor="white",
            x=10,
            y=10
        )
        .output(
            "pipe:",  # Stream output as a byte stream
            format="mp4",  # Output format as MP4 (compatible with MPEG-DASH)
            vcodec="libx264",
            acodec="copy",
            movflags="frag_keyframe+empty_moov"
        )
        .run(capture_stdout=True, capture_stderr=True)
    )

    logger.info(f"FFmpeg process completed. stdout length: {len(out)}, stderr: {err.decode('utf-8')}")
    logger.error(f"FFmpeg stderr: {err.decode('utf-8')}")
    return out  # Return the processed video stream data

except ffmpeg.Error as e:
    stderr_output = e.stderr.decode('utf-8') if e.stderr else "No stderr available"
    logger.error(f"FFmpeg error: {stderr_output}")

    raise RuntimeError(f"Error processing video: {stderr_output}")




Error I faced:


video-streaming-backend | [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f1bf99cc640] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none(tv, bt709), 1920x1012): unspecified pixel format
video-streaming-backend | Consider increasing the value for the 'analyzeduration' (10000000) and 'probesize' (5000000) options
video-streaming-backend | Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'http://web:8000/media/stream_video/chunks/ec1db006-b488-47ad-8220-79a05bcaae39/segments/init-stream0.m4s':
video-streaming-backend | Metadata:
video-streaming-backend | major_brand : iso5
video-streaming-backend | minor_version : 512
video-streaming-backend | compatible_brands: iso5iso6mp41
video-streaming-backend | encoder : Lavf60.16.100
video-streaming-backend | Duration: N/A, bitrate: N/A
video-streaming-backend | Stream #0:0[0x1](und): Video: h264 (avc1 / 0x31637661), none(tv, bt709), 1920x1012, SAR 1:1 DAR 480:253, 12288 tbr, 12288 tbn (default)
video-streaming-backend | Metadata:
video-streaming-backend | handler_name : VideoHandler
video-streaming-backend | vendor_id : [0][0][0][0]
video-streaming-backend | Stream mapping:
video-streaming-backend | Stream #0:0 (h264) -> drawtext:default
video-streaming-backend | drawtext:default -> Stream #0:0 (libx264)
video-streaming-backend | Press [q] to stop, [?] for help
video-streaming-backend | Cannot determine format of input stream 0:0 after EOF
video-streaming-backend | Error marking filters as finished
video-streaming-backend | Error while filtering: Invalid data found when processing input
video-streaming-backend | [out#0/mp4 @ 0x7f1bf8e73100] Nothing was written into output file, because at least one of its streams received no packets.
video-streaming-backend | frame= 0 fps=0.0 q=0.0 Lsize= 0kB time=N/A bitrate=N/A speed=N/A 
video-streaming-backend | Conversion failed!




These errors have left me wondering whether .m4s is a viable format for dynamic watermarking. If it's not, what would be the correct approach?
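For what it's worth, the "Could not find codec parameters ... unspecified pixel format" message is what FFmpeg reports when it is given a DASH media segment without its initialization segment, since the codec extradata lives in the init segment. A rough workaround, sketched here with placeholder file and user names, is to concatenate the init segment with the media segment before filtering:
cat init-stream0.m4s chunk-stream0-00001.m4s > segment_with_init.mp4
ffmpeg -i segment_with_init.mp4 \
 -vf "drawtext=text='user@example.com':fontfile=font/dejavu-sans/DejaVuSans-Bold.ttf:fontsize=24:fontcolor=white:x=10:y=10" \
 -c:v libx264 -c:a copy -movflags frag_keyframe+empty_moov -f mp4 pipe: > watermarked_segment.mp4
Note that this still re-encodes the segment, which is expensive to do per viewer; the key point is that FFmpeg needs the init segment in order to recover the codec parameters.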