
Other articles (74)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Managing creation and editing rights for objects
8 February 2011
By default, many features are restricted to administrators, but each remains independently configurable so that the minimum status required to use it can be changed, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;
-
Uploading media and themes via FTP
31 May 2013
The MédiaSPIP tool also handles media transferred over FTP. If you prefer to upload this way, retrieve the access credentials for your MédiaSPIP site and use your favourite FTP client.
From the start you will find the following folders in your FTP space: config/: the site's configuration folder; IMG/: media already processed and online on the site; local/: the website's cache directory; themes/: custom themes or stylesheets; tmp/: working folder (...)
On other sites (10557)
-
Why is the last frame not showing in an MP4 generated by libav?
31 October 2023, by Ben
Note: I have a working example of the problem here.


I'm using the libav/ffmpeg API to generate an MP4 with the h264 codec. In my specific situation I'm generating files with a maximum of 2 "B" frames. I'm able to generate an MP4 with exactly the right number of frames so that a single, lone "B" frame is the very last frame written. When this happens, the encoder flags that frame's packet as discardable (I've verified this with ffprobe). The net result is that some players (say, when dropping the MP4 into Edge or Chrome) will display only n-1 total frames, ignoring the discarded packet, while other players, such as VLC, will play the full n frames, not ignoring the discarded packet. So the result is inconsistent.
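
(As a rough illustration of the ffprobe check mentioned above: something along the following lines dumps the flags of every video packet. output.mp4 is a placeholder for the generated file, and the exact flag lettering depends on the ffprobe build; recent builds mark a discardable packet with a 'D' in the flags column.)

import subprocess

# List pts and flags for every video packet; the last line corresponds to the
# trailing "B" frame, whose flags show the discard marker when the problem occurs.
result = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "packet=pts,flags", "-of", "csv=p=0",
     "output.mp4"],
    capture_output=True, text=True, check=True
)
print("last packet (pts,flags):", result.stdout.strip().splitlines()[-1])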


ffmpeg.exe itself doesn't appear to have this problem. Instead, it encodes what would be the lone "B" frame as a "P" frame, which means the file plays the same regardless of which player is used.


The problem is: I don't know how to mimic ffmpeg's behavior using the SDK so that the last frame will play regardless of the player. As far as I can tell I'm closing out the file properly by flushing the encoder buffers, so I must be doing something wrong somewhere.


I provided a link to the full source above, but at a high level I'm initializing the codec context and stream like this:


// Describe the video stream, then copy those parameters into the encoder context.
newStream->codecpar->codec_id = AV_CODEC_ID_H264;
newStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
newStream->codecpar->width = Width;
newStream->codecpar->height = Height;
newStream->codecpar->format = AV_PIX_FMT_YUV420P;
newStream->time_base = { 1, 75 };
avcodec_parameters_to_context(codecContext, newStream->codecpar);

// 75 ticks per second, with a keyframe at most every 30 frames.
codecContext->time_base = { 1, 75 };
codecContext->gop_size = 30;



I then sit in a loop and use OpenCV to generate frames (each frame has its frame number drawn on it):


// Black canvas with the current frame number f drawn on it, so every
// encoded frame can be identified in a player.
auto matrix = cv::Mat(Height, Width, CV_8UC3, cv::Scalar(0, 0, 0));

std::stringstream ss;
ss << f;

cv::putText(matrix, ss.str().c_str(), textOrg, fontFace, fontScale, cv::Scalar(255, 255, 255), thickness, 8);



I then write out the frame like this (looping if more data is needed):


if ((ret = avcodec_send_frame(codecContext, frame)) == 0) {
    // The encoder accepted the frame; try to pull a finished packet.
    ret = avcodec_receive_packet(codecContext, &pkt);

    if (ret == AVERROR(EAGAIN))
    {
        // The encoder needs more input before it can emit a packet.
        continue;
    }
    else
    {
        av_interleaved_write_frame(pFormat, &pkt);
    }
    av_packet_unref(&pkt);
}



And finally I flush out the file at the end like this:


// Flush: sending a NULL frame puts the encoder into draining mode.
if ((ret = avcodec_send_frame(codecContext, NULL)) == 0)
{
    for (;;)
    {
        // Drain buffered packets until the encoder signals end of stream.
        if ((ret = avcodec_receive_packet(codecContext, &pkt)) == AVERROR_EOF)
        {
            break;
        }
        else
        {
            ret = av_interleaved_write_frame(pFormat, &pkt);
            av_packet_unref(&pkt);
        }
    }

    av_write_trailer(pFormat);
    avio_close(pFormat->pb);
}



Yet when I play the file in Chrome, the player ends on frame 6758, while in VLC it ends on frame 6759.

What am I doing wrong?


-
How to combine many m3u8 hls playlists into one using python
16 July 2024, by Дмитрий Кравчук
I have several videos in MP4 format. I created HLS playlists and segments from them using this code:


import ffmpeg

input_file_path = "D:/Projects/test2.mp4"
output_playlist = "D:/Projects/playlist.m3u8"
segment_time = 5
file_name = "test"

# Split the MP4 into ~5-second MPEG-TS segments with a VOD playlist,
# forcing a keyframe at every segment boundary.
ffmpeg.input(input_file_path).output(
    output_playlist,
    format='hls',
    hls_time=segment_time,
    hls_list_size=0,
    hls_playlist_type='vod',
    hls_segment_type='mpegts',
    hls_segment_filename=f'D:/Projects/{file_name}_%03d.ts',
    force_key_frames=f'expr:gte(t,n_forced*{segment_time})'
).run()



Then I put the segments into S3 storage and save each video's m3u8 playlist structure in a database.


Now, on request, I need to combine the videos that match a user's criteria into one full video and return it. I assumed I could simply build one common playlist from the individual videos' playlists, but that failed. I used this code for it:


from m3u8 import loads, M3U8

video_one_m3u8 = """
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:10.166667,
test1_000.ts
#EXTINF:6.166667,
test1_001.ts
#EXTINF:6.433333,
test1_002.ts
#EXT-X-ENDLIST
"""

video_two_m3u8 = """
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:8
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:8.000000,
test2_000.ts
#EXTINF:8.333333,
test2_001.ts
#EXTINF:5.533333,
test2_002.ts
#EXT-X-ENDLIST
"""

playlist1 = loads(video_one_m3u8)
playlist2 = loads(video_two_m3u8)

combinedPlaylist = M3U8()
combinedPlaylist.segments.extend(playlist2.segments)
combinedPlaylist.segments.extend(playlist1.segments)

combinedPlaylist.version = playlist1.version
combinedPlaylist.target_duration = max(playlist1.target_duration, playlist2.target_duration)
combinedPlaylist.is_endlist = True

with open(r"D:\Projects\playlist_test.m3u8", "w") as f:
    f.write(combinedPlaylist.dumps())



The resulting combined m3u8 playlist is shown below:


#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:10.166667,
test1_000.ts
#EXTINF:6.166667,
test1_001.ts
#EXTINF:6.433333,
test1_002.ts
#EXTINF:8.000000,
test2_000.ts
#EXTINF:8.333333,
test2_001.ts
#EXTINF:5.533333,
test2_002.ts
#EXT-X-ENDLIST



When I launch this playlist in VLC, it plays until it reaches the boundary with the next video and then it crashes. The VLC log shows these errors:


main error: Timestamp conversion failed for 5433334: no reference clock
main error: Could not convert timestamp 0 for FFmpeg
ts error: libdvbpsi error (PSI decoder): TS discontinuity (received 0, expected 5) for PID 17
ts error: libdvbpsi error (PSI decoder): TS discontinuity (received 0, expected 5) for PID 0
ts error: libdvbpsi error (PSI decoder): TS discontinuity (received 0, expected 5) for PID 4096
direct3d11 error: SetThumbNailClip failed: 0x800706f4



I tried to solve this timestamp problem while preparing the initial video playlists using the ffmpeg.output arguments start_at_zero=True and reset_timestamps, but it didn't help.

I also tried adding the EXT-X-DISCONTINUITY tag; it works, but the videos then play as separate clips (each one starts from the beginning) rather than as one continuous video.
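
(For reference, a minimal sketch of how the EXT-X-DISCONTINUITY attempt can be expressed with the m3u8 library, assuming the two playlist strings above and that Segment objects expose a discontinuity attribute, as current python-m3u8 releases do:)

from m3u8 import loads, M3U8

playlist1 = loads(video_one_m3u8)
playlist2 = loads(video_two_m3u8)

# Flag the first segment of the second source; dumps() then writes
# #EXT-X-DISCONTINUITY in front of it, telling players to expect a
# timestamp / continuity-counter reset at the boundary.
playlist2.segments[0].discontinuity = True

combined = M3U8()
combined.segments.extend(playlist1.segments)
combined.segments.extend(playlist2.segments)
combined.version = playlist1.version
combined.target_duration = max(playlist1.target_duration, playlist2.target_duration)
combined.is_endlist = True

with open(r"D:\Projects\playlist_test.m3u8", "w") as f:
    f.write(combined.dumps())

Since the marker only resets the player's expectations and does not rewrite the timestamps inside the .ts segments, it seems a single seamless timeline would require remuxing or regenerating the segments on a common timeline rather than editing the playlist alone.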

-
Haskell - Converting multiple images into a video file - ffmpeg-light's frameWriter function fails
26 October 2017, by oRole

Situation
Currently I am working on an application for image processing that uses ffmpeg-light to fetch all the frames of a given video file, so that the program can afterwards apply grayscaling as well as edge-detection algorithms to each of the frames. With the help of friendly stackoverflowers I was able to set up a method capable of converting several images into one video file using ffmpeg-light's frameWriter function.

Problem
The application runs fine up to the moment it hits the frameWriter function, and I don't really know why, as there are no errors or exception messages thrown. (OS: Win 10 64-bit)

What did I try?
I tried..
- different versions of ffmpeg (from 3.2 to 3.4).
- ffmpeg.exe on the command line, to test if there are any codecs missing, but every conversion I tried worked.
- different EncodingParams combinations, like: EncodingParams width height fps (Nothing) (Nothing) "medium"

Question
Unfortunately, none of the above worked, and the web lacks information on this specific case. Maybe I missed something essential (like GHC flags or something) or made a bigger mistake within my code. That is why I have to ask you: do you have any suggestions/advice for me?

Haskell Packages
ffmpeg-light-0.12.0
JuicyPixels-3.2.8.3

Code
{--------------------------------------------------------------------------------------------
Applies "juicyToFFmpeg'" and "getFPS" to a list of images and saves the output-video
to a user defined location.
---------------------------------------------------------------------------------------------}
saveVideo :: String -> [Image PixelYA8] -> Int -> IO ()
saveVideo path imgs fps = do
    -- program stops after hitting next line --
    frame <- frameWriter ep path
    ------------------------------------------------
    Prelude.mapM_ (frame . Just) ffmpegImgs
    frame Nothing
    where ep = EncodingParams width height fps (Just avCodecIdMpeg4) (Just avPixFmtGray8a) "medium"
          width = toCInt $ imageWidth $ head imgs
          height = toCInt $ imageHeight $ head imgs
          ffmpegImgs = juicyToFFmpeg' imgs
          toCInt x = fromIntegral x :: CInt

{--------------------------------------------------------------------------------------------
Converts a single image from JuicyPixel-format to ffmpeg-light-format.
---------------------------------------------------------------------------------------------}
juicyToFFmpeg :: Image PixelYA8 -> (AVPixelFormat, V2 CInt, Vector CUChar)
juicyToFFmpeg img = (avPixFmtGray8a, V2 (toCInt width) (toCInt height), ffmpegData)
    where toCInt x = fromIntegral x :: CInt
          toCUChar x = fromIntegral x :: CUChar
          width = imageWidth img
          height = imageHeight img
          ffmpegData = VS.map toCUChar (imageData img)

{--------------------------------------------------------------------------------------------
Converts a list of images from JuicyPixel-format to ffmpeg-light-format.
---------------------------------------------------------------------------------------------}
juicyToFFmpeg' :: [Image PixelYA8] -> [(AVPixelFormat, V2 CInt, Vector CUChar)]
juicyToFFmpeg' imgs = Prelude.foldr (\i acc -> acc++[juicyToFFmpeg i]) [] imgs

{--------------------------------------------------------------------------------------------
Simply calculates the FPS for image-to-video conversion.
-> frame :: (Double, DynamicImage) where Double is a timestamp of when it got extracted
---------------------------------------------------------------------------------------------}
getFPS :: [(Double, DynamicImage)] -> Int
getFPS frames = div (ceiling $ lastTimestamp - firstTimestamp) frameCount :: Int
    where firstTimestamp = fst $ head frames
          lastTimestamp = fst $ last frames
          frameCount = length frames