
Media (91)

Other articles (34)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used as a fallback.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The HTML5 player has been created specifically for MediaSPIP and can easily be adapted to fit in with a specific theme.
    For older browsers, the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The journey of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    At the moment this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
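
    The blurb is truncated, but the two extra actions it names, stream probing and thumbnail generation, map directly onto standard ffmpeg tooling. A minimal sketch of what those steps could look like, assuming a hypothetical local file source.mp4; this is an illustration, not SPIPMotion's actual server-side code:

    # retrieve the technical information of the file's audio and video streams
    ffprobe -v error -show_streams -show_format -of json source.mp4

    # generate a thumbnail by extracting a single frame (here at the 5-second mark)
    ffmpeg -ss 5 -i source.mp4 -frames:v 1 -vf "scale=320:-2" thumbnail.jpg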

Sur d’autres sites (6793)

  • ffmpeg: how to extract keyframes from certain time ranges into filenames with timestamps from the original video?

18 May 2023, by Display Name

    I have as input specific timestamps I'm interested in, and I wish to extract the keyframes closest to them. I thus use skip_frame nokey and a select='between(t,...,...)+between(t,...,...)+...' where I add a few seconds around each time of interest (enough that at least one keyframe will fall in that range for the input video I have; if more than one comes out in a given range I can delete the extras manually). Chaining the between()s lets me use a single command to extract all of these images, avoiding a seek from the beginning of the video for each image, as would happen with a separate command per image. So this part works fine.

    The problem is that I want the output image filenames to correspond to the timestamps of the extracted frames, in seconds (or some decimal fraction of a second, like tenths or milliseconds), with respect to the INPUT video. With older versions of ffmpeg I could, for example, get output filenames in tenths of a second with -vsync 0 -r 10 -frame_pts true %05d.webp, but with recent versions that results in the error One of -r/-fpsmax was specified together a non-CFR -vsync/-fps_mode. This is contradictory. Replacing the deprecated -vsync with -fps_mode and one of the CFR values makes ffmpeg DUPLICATE frames to fulfill the specified -r value, which results in a huge number of output images. The only way I can get just the set of keyframes I want, with no duplication, is to drop the -r and use -fps_mode passthrough, but then I lose the naming of the output files by their time in the original video. Searching here and elsewhere on the web, I have tried things like setting settb=...,setpts=... and -copyts, but in the end couldn't get it to work.

    As a complete example, the command

    ffmpeg -skip_frame nokey -i "input.mp4" \
      -vf "select='between(t,15,25)+between(t,40,50)+between(t,95,105)+between(t,120,130)+between(t,190,200)',scale='min(480,iw)':-2:flags=lanczos" \
      -fps_mode passthrough -copyts \
      -c:v libwebp -quality 75 -preset photo -frame_pts true %05d.webp

    gives me the right set of output images, but not filenames that would make it easy to quickly and manually find frames corresponding to specific times in the original video.
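
    One workaround, not from the original post and untested here: skip -frame_pts entirely, list the keyframe times with ffprobe, and rename the sequentially numbered outputs afterwards. This assumes the stream's start time is zero (so select's t matches ffprobe's pts_time); the kf_ prefix is illustrative:

    # list every keyframe's pts_time (decoding only keyframes keeps this fast)
    ffprobe -v error -skip_frame nokey -select_streams v:0 \
      -show_entries frame=pts_time -of csv=p=0 input.mp4 > keyframe_times.txt

    # extract with plain sequential numbering instead of -frame_pts
    ffmpeg -skip_frame nokey -i "input.mp4" \
      -vf "select='between(t,15,25)+between(t,40,50)+between(t,95,105)+between(t,120,130)+between(t,190,200)',scale='min(480,iw)':-2:flags=lanczos" \
      -fps_mode passthrough -c:v libwebp -quality 75 -preset photo kf_%05d.webp

    # keep only the times inside the same ranges, pair them in order with the
    # numbered images, and rename
    awk '($1>=15&&$1<=25)||($1>=40&&$1<=50)||($1>=95&&$1<=105)||($1>=120&&$1<=130)||($1>=190&&$1<=200)' keyframe_times.txt \
      | nl -n rz -w 5 \
      | while read n t; do mv "kf_$n.webp" "$t.webp"; done

    The pairing holds because select emits frames in presentation order, so the n-th surviving keyframe is the n-th time in the filtered list.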

  • Why is the YUV data read back via ffmpeg different from the original input YUV?

13 May 2021, by sianyi Huang

    I use ffmpeg to make an HDR test video. My approach is to write an image, convert it to yuv420p, and then use ffmpeg to turn the raw YUV into the HDR test video.

    But I found that the YUV data read back from the mp4 is different from the original input. I have been stuck here for a while; does anyone know how to read the correct YUV data from the mp4?

#imports reconstructed from how the snippet uses them
import os
import subprocess
from time import sleep

import cv2 as cv
import imageio
import numpy as np

def kill_existing_file(path):
    #helper assumed from the question's context: drop a stale output file
    if os.path.exists(path):
        os.remove(path)

#ffmpeg encode command
ffmpeg_encode_mp4 = \
"ffmpeg -y -s 100*100 -pix_fmt yuv420p -threads 4 -r 1 -stream_loop -1 -f rawvideo -i write_yuv.yuv -vf \
scale=out_h_chr_pos=0:out_v_chr_pos=0,format=yuv420p10le \
-c:v libx265 -tag:v hvc1 -t 10 -pix_fmt yuv420p10le -preset medium -x265-params \
crf=12:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=\"G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)\":max-cll=\"1000,400\" \
-an test.mp4"

#ffmpeg read yuv from mp4 command
ffmpeg_extract_yuv = "ffmpeg -i test.mp4 -vframes 1 -c:v rawvideo -pix_fmt yuv420p read_yuv.yuv"

#make a 100x100 yuv raw frame
w, h = 100, 100
test_gray = 255
test = np.full((h, w, 3), test_gray, dtype=np.uint8)
yuv_cv = cv.cvtColor(test, cv.COLOR_RGB2YUV_I420)
yuv_cv.tofile("write_yuv.yuv")

#encode yuv raw to mp4 with HDR metadata
print(ffmpeg_encode_mp4)
result = subprocess.check_output(ffmpeg_encode_mp4, shell=True)
print(result)
sleep(0.5)

#extract yuv from mp4
kill_existing_file("read_yuv.yuv")
print(ffmpeg_extract_yuv)
result = subprocess.check_output(ffmpeg_extract_yuv, shell=True)
print(result)
sleep(0.5)

write_yuv = np.fromfile("write_yuv.yuv", dtype='uint8')
read_yuv = np.fromfile("read_yuv.yuv", dtype='uint8')

print("input gray:", test_gray)
print("write_yuv", write_yuv[:10])
print("read_yuv", read_yuv[:10])

reader = imageio.get_reader("test.mp4")
img = reader.get_data(0)
print("imageio read:", img[50, 50])

'''
output result:
input gray: 255
write_yuv [235 235 235 235 235 235 235 235 235 235]
read_yuv [234 235 234 235 234 235 234 235 234 235]
imageio read: [253 253 253]
'''

    I have no idea how to validate that the video I made is correct. Any feedback would be much appreciated!
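
    A hedged note, not from the original post: the pipeline converts 8-bit input up to 10 bits, encodes lossily (crf=12), then converts back down to 8 bits, so bit-exact equality was never guaranteed; the alternating 234/235 pattern looks consistent with rounding in that round trip rather than a broken read. One way to compare with one fewer conversion step is to extract at the encoded 10-bit depth (untested sketch, using the filenames from the snippet above):

    # extract the first decoded frame at the encoded bit depth (yuv420p10le);
    # 8-bit 235 maps to 940 in 10-bit limited range (235 * 4), so the 16-bit
    # little-endian samples in read_yuv10.yuv can be compared to write_yuv * 4
    ffmpeg -i test.mp4 -vframes 1 -c:v rawvideo -pix_fmt yuv420p10le read_yuv10.yuv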