
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (50)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site and around MediaSPIP in general aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
-
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
On other sites (7173)
-
Crop individual frames of a video and then concat for output
27 June 2024, by Ashish Padave
I want to crop individual frames of a video and then concat them for the output. This works with two ffmpeg commands: the first extracts each frame and the second concatenates them.
I want to get it done without the intermediate frames.


I tried the following:


ffmpeg -y -i input.mp4 -filter_complex "[0:v]split=890[v0][v1][v2][v3][v4][v5];[v0]select='eq(n\,0)',setpts=PTS-STARTPTS,crop=404:720:225:0[v0]; [v1]select='eq(n\,1)',setpts=PTS-STARTPTS,crop=404:720:225:0[v1]; [v2]select='eq(n\,2)',setpts=PTS-STARTPTS,crop=404:720:225:0[v2]; [v3]select='eq(n\,3)',setpts=PTS-STARTPTS,crop=404:720:225:0[v3]; [v4]select='eq(n\,4)',setpts=PTS-STARTPTS,crop=404:720:225:0[v4]; [v5]select='eq(n\,5)',setpts=PTS-STARTPTS,crop=404:720:225:0[v5];[v0][v1][v2][v3][v4][v5]concat=n=6:v=1:a=0[outv]" -map "[outv]" -map 0:a? -c:a copy -vsync 2 output.mp4



The above is an abridged version of the command. The video I am working with has 890 frames and a frame rate of 25.


The output log with 890 frames is


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf59.5.100
 Duration: 00:00:35.61, start: 0.000000, bitrate: 3380 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720, 3246 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
 Metadata:
 handler_name : VideoHandler
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
 Metadata:
 handler_name : SoundHandler
Stream mapping:
 Stream #0:0 (h264) -> split
 concat -> Stream #0:0 (libx264)
 Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[libx264 @ 0x55963ad4b700] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2 AVX512
[libx264 @ 0x55963ad4b700] profile High, level 3.0
[libx264 @ 0x55963ad4b700] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 404x720, q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
 Metadata:
 encoder : Lavc58.54.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
 Metadata:
 handler_name : SoundHandler
frame= 3 fps=0.6 q=-1.0 Lsize= 586kB time=00:00:35.59 bitrate= 134.9kbits/s dup=0 drop=887 speed=6.73x
video:21kB audio:558kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.278277%
[libx264 @ 0x55963ad4b700] frame I:1 Avg QP:23.09 size: 17546
[libx264 @ 0x55963ad4b700] frame P:1 Avg QP:25.73 size: 2069
[libx264 @ 0x55963ad4b700] frame B:1 Avg QP:26.34 size: 845
[libx264 @ 0x55963ad4b700] consecutive B-frames: 33.3% 66.7% 0.0% 0.0%
[libx264 @ 0x55963ad4b700] mb I I16..4: 8.4% 68.9% 22.7%
[libx264 @ 0x55963ad4b700] mb P I16..4: 0.1% 0.6% 0.3% P16..4: 28.5% 5.0% 3.9% 0.0% 0.0% skip:61.6%
[libx264 @ 0x55963ad4b700] mb B I16..4: 0.0% 0.1% 0.0% B16..8: 29.6% 3.4% 0.3% direct: 0.2% skip:66.4% L0:59.3% L1:35.6% BI: 5.1%
[libx264 @ 0x55963ad4b700] 8x8 transform intra:68.8% inter:73.3%
[libx264 @ 0x55963ad4b700] coded y,uvDC,uvAC intra: 73.0% 80.1% 36.4% inter: 3.9% 7.0% 0.7%
[libx264 @ 0x55963ad4b700] i16 v,h,dc,p: 2% 74% 6% 18%
[libx264 @ 0x55963ad4b700] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 13% 36% 20% 8% 4% 2% 6% 4% 8%
[libx264 @ 0x55963ad4b700] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 41% 10% 4% 7% 4% 5% 3% 5%
[libx264 @ 0x55963ad4b700] i8c dc,h,v,p: 44% 37% 14% 6%
[libx264 @ 0x55963ad4b700] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x55963ad4b700] kb/s:1364.00



So it is basically dropping 887 frames. The output file has the full audio but no video.
Is this even possible?
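
As a rough sketch of one way to drive the same approach at full scale: instead of typing the 890-branch graph by hand, the select/crop/concat chains can be generated programmatically and handed to ffmpeg through -filter_complex_script, which reads the filtergraph from a file and avoids shell length limits. The per-frame crop list below is illustrative (all crops identical, as in the abridged command above); note that splitting into one branch per frame is very memory-hungry for long inputs.

import subprocess

N = 890                              # frame count of input.mp4 (25 fps, ~35.6 s)
crops = [(404, 720, 225, 0)] * N     # (w, h, x, y) per frame; placeholder values

graph = ["[0:v]split=" + str(N) + "".join(f"[s{i}]" for i in range(N)) + ";"]
for i, (w, h, x, y) in enumerate(crops):
    # one select/crop chain per frame, mirroring the hand-written command above
    graph.append(f"[s{i}]select='eq(n\\,{i})',setpts=PTS-STARTPTS,crop={w}:{h}:{x}:{y}[c{i}];")
graph.append("".join(f"[c{i}]" for i in range(N)) + f"concat=n={N}:v=1:a=0[outv]")

with open("graph.txt", "w") as f:
    f.write("".join(graph))

subprocess.run([
    "ffmpeg", "-y", "-i", "input.mp4",
    "-filter_complex_script", "graph.txt",
    "-map", "[outv]", "-map", "0:a?", "-c:a", "copy",
    "-vsync", "2", "output.mp4",
], check=True)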


-
Using ffprobe/ffmpeg to extract individual frames and types in playback sequence
24 January 2018, by Hélder
I am able to extract keyframes using ffmpeg. Something like this is what I have been using:
ffmpeg -i input.mp4 -vf "select='eq(pict_type\,I)'" -vsync vfr -qscale:v 2 I-thumbnails-%02d.png -vf "select='eq(pict_type\,B)'" -vsync vfr -qscale:v 2 B-thumbnails-%02d.png -vf "select='eq(pict_type\,P)'" -vsync vfr -qscale:v 2 P-thumbnails-%02d.png
Now the issue is: I would like these extracted frames to be in playback sequence. If possible, the extracted frames should carry a timestamp or some other marker of their position in the sequence, for example, from start to end:
IBBBIPPB......BI
but in a way that lets me sort the frames into the playback sequence.
I want to load this in Python to extract motion vectors, but they should all follow a certain playback sequence. Any help?
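
One way to recover the playback position of every frame type, as a minimal sketch (assuming a reasonably recent ffprobe; older builds expose pkt_pts_time instead of pts_time), is to ask ffprobe for pict_type together with the presentation timestamp and sort on it; the thumbnails extracted per type can then be matched back to these timestamps.

import json
import subprocess

probe = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "frame=pict_type,pts_time",
     "-of", "json", "input.mp4"],
    capture_output=True, text=True, check=True,
)

frames = json.loads(probe.stdout)["frames"]
# Decoded frames generally come out in presentation order already; sorting by
# pts_time makes the playback ordering explicit and attaches a timestamp to each type.
ordered = sorted(frames, key=lambda fr: float(fr["pts_time"]))

print("".join(fr["pict_type"] for fr in ordered))   # e.g. IBBBPBBP...
for fr in ordered:
    print(fr["pts_time"], fr["pict_type"])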
-
Live streaming of processed frames to AWS
22 April 2021, by MinasCham
I'm working on a project where I need to capture a live video feed from an RTSP camera source, process the video frame by frame, and stream the result to an AWS service.


So far, my solution:


- Captures frames from the RTSP camera source using OpenCV and performs some processing.
- Feeds the processed frames to an ffmpeg pipe that packages the content for online streaming (HTTP Live Streaming, HLS) and saves it locally.
- Transfers the media content to an Amazon Kinesis Video Stream using a GStreamer pipeline element with kvssink as the sink element.








My questions are:


- Currently I'm saving the content both locally and on an Amazon Kinesis Video Stream. Is this efficient?
- Is it possible to stream the frames directly to the Amazon Kinesis Video Stream (perhaps by connecting the ffmpeg output with the GStreamer pipeline element)?
- Is the file format suitable for this implementation, or would it be better to encode the media differently?








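For reference, a rough sketch of the OpenCV-to-ffmpeg step described above: processed OpenCV frames are written to ffmpeg's stdin as raw BGR video and packaged as HLS. The RTSP URL, encoder settings and segment options are placeholders, and whether the resulting segments are then handed to kvssink or replaced by a direct Kinesis producer is exactly the open question.

import subprocess
import cv2

RTSP_URL = "rtsp://camera.example/stream"   # placeholder URL
cap = cv2.VideoCapture(RTSP_URL)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS) or 25

ffmpeg = subprocess.Popen(
    ["ffmpeg", "-y",
     "-f", "rawvideo", "-pix_fmt", "bgr24",
     "-s", f"{width}x{height}", "-r", str(fps), "-i", "-",   # raw frames on stdin
     "-c:v", "libx264", "-preset", "veryfast", "-g", str(int(fps) * 2),
     "-f", "hls", "-hls_time", "2", "-hls_list_size", "5",
     "-hls_flags", "delete_segments", "stream.m3u8"],
    stdin=subprocess.PIPE,
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... per-frame processing goes here ...
    ffmpeg.stdin.write(frame.tobytes())

ffmpeg.stdin.close()
ffmpeg.wait()
cap.release()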