
On other sites (8911)
-
ffmpeg: use vidstabtransform to overlay it over a blurred background
5 November 2023, by konewka

I am using ffmpeg to concatenate multiple video clips taken of the same object over multiple timeframes. To make sure the videos are properly aligned (and therefore show the object in roughly the same position), I manually identify two points in the first frame of each clip, and use those to calculate the scaling and positioning necessary for proper alignment. I'm using Python for this, and it also generates the ffmpeg command for me. When it has calculated that the appropriate scale of the video is less than 100%, some parts of the frame will become black. To counter that, I overlay the scaled and positioned video over a blurred version of the original video (like this effect).
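The alignment math described here might look roughly like the following in Python (a sketch under assumed conventions; the function name and point format are illustrative, not taken from the question):

import math

# Illustrative sketch (not the poster's actual code): derive the uniform
# scale and overlay offset that map a clip's two reference points
# (q1, q2) onto the base video's reference points (p1, p2).
def alignment(p1, p2, q1, q2):
    # scale = ratio of the distances between the two point pairs
    d_ref = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    d_clip = math.hypot(q2[0] - q1[0], q2[1] - q1[1])
    scale = d_ref / d_clip
    # after scaling, shift so the clip's first point lands on p1
    x = round(p1[0] - q1[0] * scale)
    y = round(p1[1] - q1[1] * scale)
    return scale, x, y

# e.g. a computed scale of 0.8 with (x, y) = (100, 200) would produce the
# scale=...*0.8... and overlay=x=100:y=200 parameters in the commands below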

Now, additionally, some of the video clips are a bit shaky, so my flow first applies the vidstabdetect and vidstabtransform filters, and uses the stabilized output as input for my final command. However, if the shaking is significant, vidstabtransform will zoom in, so I either lose some of the detail around the edges, or a black border is created around the edge. Since I later include the stabilized version of the video in the concatenation, possibly shrunk, I would rather perform the vidstabtransform step inside my final command and feed its output directly into the overlay over the blurred version. That way, the clip would move around the frame as it is stabilized, shown over the blurred background. Is it possible to achieve this using ffmpeg, or am I trying to stretch it too far?

As a minimal example, these are my commands (the filter graph splits each input, blurs one copy, scales the other down, overlays the scaled copy on the blurred one at a fixed position, and concatenates the two results):


ffmpeg -i video1.mp4 -vf vidstabdetect=output=transform.trf -f null - 

ffmpeg -i video1.mp4 -vf vidstabtransform=input=transform.trf video1_stabilized.mp4

# same for video2.mp4

ffmpeg -i video1_stabilized.mp4 -i video2_stabilized.mp4 -filter_complex "
 [0:v]split=2[v0blur][v0scale];
 [v0blur]gblur=sigma=50[v0blur];
 [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];
 [v0blur][v0scale]overlay=x=100:y=200[v0];
 [1:v]split=2[v1blur][v1scale];
 [v1blur]gblur=sigma=50[v1blur];
 [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
 [v1blur][v1scale]overlay=x=150:y=150[v1];
 [v0][v1]concat=n=2"
-c:v libx264 -r 30 out.mp4



So, I know I can put the vidstabtransform step into the filter_complex graph (I'll still do the detection in a separate step), but can I also use it in such a way that I achieve the stabilization over the blurred background, with the clip moving around the frame as it is stabilized?

EDIT: to include vidstabtransform in the filter graph, it would then look like this:

ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "
 [0:v]vidstabtransform=input=transform1.trf[v0stab];
 [v0stab]split=2[v0blur][v0scale];
 [v0blur]gblur=sigma=50[v0blur];
 [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];
 [v0blur][v0scale]overlay=x=100:y=200[v0];
 [1:v]vidstabtransform=input=transform2.trf[v1stab];
 [v1stab]split=2[v1blur][v1scale];
 [v1blur]gblur=sigma=50[v1blur];
 [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
 [v1blur][v1scale]overlay=x=150:y=150[v1];
 [v0][v1]concat=n=2"
-c:v libx264 -r 30 out.mp4
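One knob that may help here, offered as a hedged suggestion rather than a verified answer: vidstabtransform has an optzoom option (optzoom=0 disables the automatic zoom) and a crop option (crop=black fills the uncovered edges), so the full frame is kept and the stabilizer's compensation shows up as the clip shifting inside the frame instead of being cropped away. For one input, that could look like:

ffmpeg -i video1.mp4 -filter_complex "
 [0:v]vidstabtransform=input=transform1.trf:optzoom=0:crop=black[v0stab];
 [v0stab]split=2[v0blur][v0scale];
 [v0blur]gblur=sigma=50[v0blur];
 [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];
 [v0blur][v0scale]overlay=x=100:y=200[v0]" -map "[v0]" -c:v libx264 out_test.mp4

Note that the border produced by crop=black still sits inside the scaled clip, so it covers the blurred background rather than revealing it; making those border pixels transparent would need an extra masking step, which is exactly the open question above.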



-
Haskell - Converting multiple images into a video file - ffmpeg-light's frameWriter function fails
26 October 2017, by oRole

Situation
Currently I am working on an image-processing application that uses ffmpeg-light to fetch all the frames of a given video file, so that the program can afterwards apply grayscaling as well as edge-detection algorithms to each of the frames. With the help of friendly stackoverflowers I was able to set up a method capable of converting several images into one video file using ffmpeg-light's frameWriter function.

Problem
The application runs fine up to the moment it hits the frameWriter function, and I don't really know why, as there are no errors or exception messages thrown. (OS: Win 10 64bit)

What did I try?
I tried..
- different versions of ffmpeg (from 3.2 to 3.4);
- ffmpeg.exe on the command line, to test whether any codecs were missing, but every conversion I tried worked;
- different EncodingParams combinations, like: EncodingParams width height fps (Nothing) (Nothing) "medium"

Question
Unfortunately, none of the above worked, and the web lacks information on this specific case. Maybe I missed something essential (like GHC flags) or made a bigger mistake within my code. That is why I have to ask you: do you have any suggestions or advice for me?

Haskell Packages
- ffmpeg-light-0.12.0
- JuicyPixels-3.2.8.3

Code
{- Presumed imports for this snippet; the original post omitted them. -}
import Codec.FFmpeg (EncodingParams(..), frameWriter,
                     avCodecIdMpeg4, avPixFmtGray8a, AVPixelFormat)
import Codec.Picture (Image, PixelYA8, DynamicImage,
                      imageWidth, imageHeight, imageData)
import Data.Vector.Storable (Vector)
import qualified Data.Vector.Storable as VS
import Foreign.C.Types (CInt, CUChar)
import Linear (V2(..))

{--------------------------------------------------------------------------------------------
 Applies "juicyToFFmpeg'" and "getFPS" to a list of images and saves the output video
 to a user-defined location.
---------------------------------------------------------------------------------------------}
saveVideo :: String -> [Image PixelYA8] -> Int -> IO ()
saveVideo path imgs fps = do
    -- program stops after hitting the next line --
    frame <- frameWriter ep path
    Prelude.mapM_ (frame . Just) ffmpegImgs
    frame Nothing  -- passing Nothing finalizes and closes the output file
  where ep = EncodingParams width height fps (Just avCodecIdMpeg4) (Just avPixFmtGray8a) "medium"
        width = toCInt $ imageWidth $ head imgs
        height = toCInt $ imageHeight $ head imgs
        ffmpegImgs = juicyToFFmpeg' imgs
        toCInt x = fromIntegral x :: CInt

{--------------------------------------------------------------------------------------------
 Converts a single image from JuicyPixels format to ffmpeg-light format.
---------------------------------------------------------------------------------------------}
juicyToFFmpeg :: Image PixelYA8 -> (AVPixelFormat, V2 CInt, Vector CUChar)
juicyToFFmpeg img = (avPixFmtGray8a, V2 (toCInt width) (toCInt height), ffmpegData)
  where toCInt x = fromIntegral x :: CInt
        toCUChar x = fromIntegral x :: CUChar
        width = imageWidth img
        height = imageHeight img
        ffmpegData = VS.map toCUChar (imageData img)

{--------------------------------------------------------------------------------------------
 Converts a list of images from JuicyPixels format to ffmpeg-light format,
 preserving the frame order.
---------------------------------------------------------------------------------------------}
juicyToFFmpeg' :: [Image PixelYA8] -> [(AVPixelFormat, V2 CInt, Vector CUChar)]
juicyToFFmpeg' = Prelude.map juicyToFFmpeg

{--------------------------------------------------------------------------------------------
 Simply calculates the FPS for image-to-video conversion: frame count divided
 by the elapsed time between the first and last frame.
 -> frame :: (Double, DynamicImage) where Double is a timestamp of when it was extracted
---------------------------------------------------------------------------------------------}
getFPS :: [(Double, DynamicImage)] -> Int
getFPS frames = div frameCount (ceiling $ lastTimestamp - firstTimestamp)
  where firstTimestamp = fst $ head frames
        lastTimestamp = fst $ last frames
        frameCount = length frames
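
One thing worth checking, offered as a hedged guess rather than a confirmed fix: ffmpeg-light requires initFFmpeg to be called once before any encoding, and skipping it can make the codec setup behind frameWriter fail without a Haskell-level exception. A minimal driver sketch, assuming the saveVideo function above (loadImages is a hypothetical placeholder, not part of the original code):

import Codec.FFmpeg (initFFmpeg)
import Codec.Picture (Image, PixelYA8)

-- hypothetical loader, stands in for however the frames are produced
loadImages :: IO [Image PixelYA8]
loadImages = undefined

main :: IO ()
main = do
  -- Registers ffmpeg's codecs and formats. Without this call the encoder
  -- may fail silently (an assumption to test, not a verified diagnosis).
  initFFmpeg
  imgs <- loadImages
  saveVideo "D:/Projects/out.avi" imgs 25
-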
How to combine many m3u8 HLS playlists into one using Python
16 July 2024, by Дмитрий Кравчук

I have several videos in mp4 format. I created HLS playlists and segments from them using this code:


import ffmpeg

input_file_path = "D:/Projects/test2.mp4"
output_playlist = "'D:/Projects/playlist.m3u8"
segment_time = 5
file_name = "test"

ffmpeg.input(input_file_path).output(
 output_playlist,
 format='hls',
 hls_time=segment_time, 
 hls_list_size=0, 
 hls_playlist_type='vod',
 hls_segment_type='mpegts',
 hls_segment_filename=f'D:/Projects/{file_name}_%03d.ts', 
 force_key_frames=f'expr:gte(t,n_forced*{segment_time})'
).run()



Then I upload the segments to S3 storage and save each video's m3u8 playlist structure in a database.


Now, on request, I need to combine the videos that match a user's criteria into one full video and return it. I assumed I could simply build one combined playlist from the individual video playlists, but that failed. I used this code:


from m3u8 import loads, M3U8

video_one_m3u8 = """
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:10.166667,
test1_000.ts
#EXTINF:6.166667,
test1_001.ts
#EXTINF:6.433333,
test1_002.ts
#EXT-X-ENDLIST
"""

video_two_m3u8 = """
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:8
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:8.000000,
test2_000.ts
#EXTINF:8.333333,
test2_001.ts
#EXTINF:5.533333,
test2_002.ts
#EXT-X-ENDLIST
"""

playlist1 = loads(video_one_m3u8)
playlist2 = loads(video_two_m3u8)

combinedPlaylist = M3U8()
combinedPlaylist.segments.extend(playlist1.segments)
combinedPlaylist.segments.extend(playlist2.segments)

combinedPlaylist.version = playlist1.version
combinedPlaylist.target_duration = max(playlist1.target_duration, playlist2.target_duration)
combinedPlaylist.is_endlist = True

with open(r"D:\Projects\playlist_test.m3u8", "w") as f:
 f.write(combinedPlaylist.dumps())



The resulting combined m3u8 playlist is below:


#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:10.166667,
test1_000.ts
#EXTINF:6.166667,
test1_001.ts
#EXTINF:6.433333,
test1_002.ts
#EXTINF:8.000000,
test2_000.ts
#EXTINF:8.333333,
test2_001.ts
#EXTINF:5.533333,
test2_002.ts
#EXT-X-ENDLIST



When I play this playlist in VLC, it works until the boundary with the next video and then crashes. The VLC log shows these errors:


main error: Timestamp conversion failed for 5433334: no reference clock
main error: Could not convert timestamp 0 for FFmpeg
ts error: libdvbpsi error (PSI decoder): TS discontinuity (received 0, expected 5) for PID 17
ts error: libdvbpsi error (PSI decoder): TS discontinuity (received 0, expected 5) for PID 0
ts error: libdvbpsi error (PSI decoder): TS discontinuity (received 0, expected 5) for PID 4096
direct3d11 error: SetThumbNailClip failed: 0x800706f4



I tried to solve this timestamp problem while preparing the initial video playlists using the ffmpeg.output arguments (start_at_zero=True and reset_timestamps), but it doesn't help.

I also tried adding the EXT-X-DISCONTINUITY tag; it works, but the videos play as separate items (each starting from the beginning), not as one continuous video.
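
For what it's worth, the m3u8 library can emit that tag per segment: each Segment object has a discontinuity flag, so marking only the first segment of the second video announces the timestamp reset that the VLC log complains about. Whether a player then presents the result as one continuous video varies; this is a sketch of the idea using the same video_one_m3u8 / video_two_m3u8 strings as above, not a guaranteed fix:

from m3u8 import loads

playlist1 = loads(video_one_m3u8)
playlist2 = loads(video_two_m3u8)

# mark the join point: the first segment of the second video carries
# EXT-X-DISCONTINUITY, signalling the PTS/continuity-counter reset
playlist2.segments[0].discontinuity = True

combined = playlist1  # reuse the first playlist as the base
combined.segments.extend(playlist2.segments)
combined.target_duration = max(playlist1.target_duration, playlist2.target_duration)
combined.is_endlist = True

with open(r"D:\Projects\playlist_test.m3u8", "w") as f:
    f.write(combined.dumps())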