Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (90)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Taking part in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    At present, MediaSPIP is only available in French and (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects / individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (22609)

  • Why is the YUV data read back with ffmpeg different from the original input YUV?

    13 May 2021, by sianyi Huang

    I use ffmpeg to make an HDR test video. My approach is to write an image, convert the image to yuv420p, and then use ffmpeg to make the HDR test video.

    But I found that the YUV data read back from the mp4 is different from the original input.
I've been stuck here for a while; does anyone know how to read the correct YUV data from the mp4?

import os
import subprocess
from time import sleep

import cv2 as cv
import imageio
import numpy as np

def kill_existing_file(path):
    # remove the file if it already exists, so ffmpeg writes a fresh one
    if os.path.exists(path):
        os.remove(path)

# ffmpeg encode command (note: ffmpeg expects the size as WxH, e.g. 100x100)
ffmpeg_encode_mp4 = \
"ffmpeg -y -s 100x100 -pix_fmt yuv420p -threads 4 -r 1 -stream_loop -1 -f rawvideo -i write_yuv.yuv -vf \
scale=out_h_chr_pos=0:out_v_chr_pos=0,format=yuv420p10le \
-c:v libx265 -tag:v hvc1 -t 10 -pix_fmt yuv420p10le -preset medium -x265-params \
crf=12:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=\"G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)\":max-cll=\"1000,400\" \
-an test.mp4"

# ffmpeg command to read yuv back from the mp4
ffmpeg_extract_yuv = "ffmpeg -i test.mp4 -vframes 1 -c:v rawvideo -pix_fmt yuv420p read_yuv.yuv"


# make a 100x100 raw yuv frame
w, h = 100, 100
test_gray = 255
test = np.full((h, w, 3), test_gray, dtype=np.uint8)
yuv_cv = cv.cvtColor(test, cv.COLOR_RGB2YUV_I420)
yuv_cv.tofile("write_yuv.yuv")

# encode the raw yuv frame to mp4 with HDR metadata
print(ffmpeg_encode_mp4)
result = subprocess.check_output(ffmpeg_encode_mp4, shell=True)
print(result)
sleep(0.5)

# extract yuv from the mp4
kill_existing_file("read_yuv.yuv")
print(ffmpeg_extract_yuv)
result = subprocess.check_output(ffmpeg_extract_yuv, shell=True)
print(result)
sleep(0.5)

write_yuv = np.fromfile("write_yuv.yuv", dtype='uint8')
read_yuv = np.fromfile("read_yuv.yuv", dtype='uint8')

print("input gray:", test_gray)
print("write_yuv", write_yuv[:10])
print("read_yuv", read_yuv[:10])

reader = imageio.get_reader("test.mp4")
img = reader.get_data(0)
print("imageio read:", img[50, 50])

'''
output result:
input gray: 255
write_yuv [235 235 235 235 235 235 235 235 235 235]
read_yuv [234 235 234 235 234 235 234 235 234 235]
imageio read: [253 253 253]
'''


    I have no idea how to validate that the video I made is correct.
Any feedback would be much appreciated!
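For debugging, it can help to compare the two raw files plane by plane rather than just the first ten bytes (which all come from the Y plane in yuv420p). A minimal sketch, assuming 8-bit yuv420p data of known dimensions; the helper names are illustrative:

```python
def split_yuv420p(data, w, h):
    """Split one raw 8-bit yuv420p frame (bytes) into its Y, U, V planes."""
    y_size = w * h
    c_size = (w // 2) * (h // 2)  # each chroma plane is subsampled 2x2
    y = data[:y_size]
    u = data[y_size:y_size + c_size]
    v = data[y_size + c_size:y_size + 2 * c_size]
    return y, u, v

def max_plane_diff(a, b, w, h):
    """Per-plane maximum absolute difference between two yuv420p frames."""
    planes_a = split_yuv420p(a, w, h)
    planes_b = split_yuv420p(b, w, h)
    return {name: max(abs(x - y) for x, y in zip(pa, pb))
            for name, pa, pb in zip("YUV", planes_a, planes_b)}

# Tiny 2x2 synthetic example (Y plane of 4 bytes, then 1 byte each of U and V)
a = bytes([235, 235, 235, 235, 128, 128])
b = bytes([234, 235, 234, 235, 128, 127])
print(max_plane_diff(a, b, 2, 2))  # {'Y': 1, 'U': 0, 'V': 1}
```

Applied to write_yuv.yuv and read_yuv.yuv with w = h = 100, a per-plane difference of only ±1 (as in the printed output above) would be consistent with the lossy crf=12 x265 encode plus the 10-bit to 8-bit round trip, rather than with a bug in how the YUV is read back.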


  • Cut .mkv Video using ffmpeg without changing original bitrate

    23 January 2021, by benito_h

    I accidentally said an inappropriate swear word during an educational video (good start, I know), so I would like to remove this section from the .mkv video. However, I would like the video and audio bitrate and quality to remain unchanged.

    First, I tried cutting the video slightly after the relevant time stamp without re-encoding it, using for example

    ffmpeg -i input.mkv -ss 00:01:09.200 -c copy -t 4:11 output.mkv


    but this way the first couple of seconds seem to get lost.

    Is there a way to remove the relevant segment (01:08.800 to 01:09.200) while maintaining the same bitrate / quality for audio and video? Since only formulas are shown, a slight audio/video desync wouldn't even matter.
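One common approach that avoids re-encoding entirely (a sketch, not a definitive recipe: with stream copy the cut points snap to keyframes, so slightly more than the 01:08.800–01:09.200 span may be removed) is to copy the two parts around the offending segment and rejoin them with ffmpeg's concat demuxer:

```shell
# Part 1: everything before the segment to remove (stream copy, no re-encode)
ffmpeg -i input.mkv -t 00:01:08.800 -c copy part1.mkv

# Part 2: everything after it; -ss placed before -i snaps to the nearest keyframe
ffmpeg -ss 00:01:09.200 -i input.mkv -c copy part2.mkv

# Rejoin the two parts, still without re-encoding
printf "file 'part1.mkv'\nfile 'part2.mkv'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mkv
```

Because nothing is re-encoded, bitrate and quality are untouched; the trade-off is that the cuts are only keyframe-accurate, which the question suggests is acceptable here.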

  • ffmpeg extract keyframes from certain time ranges to filenames with timestamps from original video

    17 May 2023, by Display Name

    I have specific timestamps I'm interested in as input, and I wish to extract the keyframes closest to them. I therefore use skip_frame nokey and a select='between(t,...,...)+between(t,...,...)+...' where I add a few seconds around each time of interest (enough that at least one keyframe falls in each range for the input video I have; I can then manually delete extras if more than one comes out in a given range). Chaining the between()s lets me extract all of these images with a single command, avoiding a seek from the beginning of the video for each image, as would happen with a separate command per image. So this part works fine.

    The problem is that I want the output image filenames to correspond to the timestamps, in seconds (or some decimal fraction of seconds, such as tenths or milliseconds), of the extracted frames with respect to the INPUT video. With older versions of ffmpeg, I was able to get output filenames in tenths of a second with -vsync 0 -r 10 -frame_pts true %05d.webp, but with recent versions that results in the error One of -r/-fpsmax was specified together a non-CFR -vsync/-fps_mode. This is contradictory. Replacing the deprecated -vsync with -fps_mode and one of the CFR values makes ffmpeg DUPLICATE frames to fulfill the specified -r value, which produces a huge number of output images. The only way I can get just the set of keyframes I want, with no duplication, is to drop the -r and use -fps_mode passthrough, but then I lose the naming of the output files by their time in the original video. Searching here and elsewhere on the web, I have tried things like setting settb=...,setpts=... and -copyts, but in the end I couldn't get it to work.

    As a complete example, the command

ffmpeg -skip_frame nokey -i "input.mp4" -vf "select='between(t,15,25)+between(t,40,50)+between(t,95,105)+between(t,120,130)+between(t,190,200)',scale='min(480,iw)':-2:flags=lanczos" -fps_mode passthrough -copyts -c:v libwebp -quality 75 -preset photo -frame_pts true %05d.webp

    gives me the right set of output images, but not filenames that would make it easy for me to quickly and manually find the frames corresponding to specific times in the original video.