Advanced search

Media (1)

Word: - Tags - / Christian Nold

Other articles (111)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, in the navigation menu, you can access a "Language management" section that lets you enable support for new languages.
    Each newly added language can still be deactivated as long as no object has been created in that language. In that case, it becomes greyed out in the configuration and (...)

On other sites (10045)

  • Higher quality of JPEG frames extracted with FFMPEG from MP4 video [on hold]

    5 June 2019, by 2006pmach

    I have a 4K MP4 video file, and my goal is to extract the individual frames. Unfortunately, the video is quite big (10 GB), and storing all the frames losslessly (e.g. as PNG, which results in 12 MB per frame) is not an option. Therefore, I tried saving JPEG images directly. For me, quality is more important than a small file size; around 1 MB per frame would be good.
    To do so, I used FFmpeg as follows:

    ffmpeg -ss 00:04:52 -i video.MP4 -qscale:v $quality -frames:v 1 output-$quality.jpg

    I tried the full range from 2 to 31 for $quality and obtained the blue curve in the plot (PSNR vs. file size).
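    The sweep over $quality can be scripted as a small loop; this sketch just prints each invocation (drop the echo to actually run ffmpeg):

    ```shell
    # -qscale:v for the JPEG encoder ranges from 2 (best quality) to 31 (worst).
    for quality in $(seq 2 31); do
      echo ffmpeg -ss 00:04:52 -i video.MP4 -qscale:v "$quality" -frames:v 1 "output-$quality.jpg"
    done
    ```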

    Additionally, I extracted the frame, saved it as PNG, and used convert from ImageMagick to compress the PNG file to JPEG as follows:

    convert -quality $quality% frame.png output-$quality.jpg

    Again, I tried the full range for $quality from 10 to 100 and obtained the orange line in the plot. (The highest quality reaches 50 dB but uses 6 MB, so I only show results up to 2 MB.)

    Now, my questions are as follows. Why is the quality from ffmpeg so much worse than from ImageMagick? Is it possible to increase the quality of the JPEG frames using ffmpeg directly, or do I need to go via PNG and then to JPEG? The latter method is somewhat suboptimal, because it requires storing the PNG and is much slower. Any suggestions? My guess is that ffmpeg trades quality for speed...

    [Plot: PSNR vs. file size for ffmpeg (blue) and ImageMagick (orange)]
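    For reference, PSNR figures like those plotted can be computed with ffmpeg's psnr filter; a sketch, assuming frame.png is the lossless reference for the same frame:

    ```shell
    # Compare the JPEG against the lossless PNG reference; ffmpeg reports
    # an "average:" PSNR value on stderr and discards the decoded output.
    ffmpeg -i output-2.jpg -i frame.png -lavfi psnr -f null -
    ```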

  • Precisely trim a video with ffmpeg

    14 August 2021, by user16665676

    I want to precisely trim a video using ffmpeg (i.e. between keyframes), while keeping the original quality and the audio in sync. I understand that -c copy alone will not do this and that re-encoding is necessary. Rather than re-encoding the entire trim duration, I have a solution:

    assuming the following:

    • trim start is 00:00:10
    • trim end is 00:00:20
    • closest keyframe to the trim start is 00:00:10.5
    trim from the specified start to the first keyframe (the keyframe timestamps are provided by ffprobe):

    ffmpeg -ss 00:00:10 -i input.mp4 -t 0.5 -enc_timebase -1 -vsync 2 1.mp4

    -enc_timebase -1 -vsync 2 are necessary; otherwise 2.mp4 has a much slower playback speed (though the audio remains the same and consequently finishes much sooner than the video).
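    The keyframe timestamps referred to above can be listed with ffprobe, for example (a sketch; input.mp4 is the source file):

    ```shell
    # Print the pts_time of every video keyframe in input.mp4, one per line.
    # -skip_frame nokey makes the decoder skip everything except keyframes.
    ffprobe -v error -select_streams v:0 -skip_frame nokey \
      -show_entries frame=pts_time -of csv=p=0 input.mp4
    ```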
    trim, using -c copy, from the keyframe above to the end time:

    ffmpeg -ss 00:00:10.5 -t 00:00:09.5 -i input.mp4 -c copy 2.mp4
    finally, concatenate both of the videos:

    ffmpeg -f concat -i input.txt -c copy output.mp4
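    The input.txt consumed by the concat demuxer is just a plain list of the two parts, one `file` directive per line; a minimal sketch:

    ```shell
    # Write the concat list; paths are resolved relative to input.txt.
    # (Add -safe 0 to the ffmpeg command line if you use absolute paths.)
    cat > input.txt <<'EOF'
    file '1.mp4'
    file '2.mp4'
    EOF
    ```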
    This process works well and is quick, as I'm only re-encoding up to the first keyframe (usually less than a second). However, when concatenating the trimmed (1.mp4) and copied (2.mp4) parts, there is a noticeable jitter (in both audio and video) in the concatenated output where the first keyframe starts.
    After extracting each individual frame from both videos, it's evident that the keyframe is included twice: once at the end of the trimmed part (1.mp4) and once at the start of the copied part (2.mp4). Also, if I don't include -enc_timebase -1 -vsync 2 and the output is played back in "slow motion", on the copied (2.mp4) part of the output the first keyframe is on screen for much longer than the others. I believe this to be the cause.
    Is there a better way to do this? Am I messing up any timings/syncing? I'm not entirely sure whether -enc_timebase -1 -vsync 2 are the right options to use.
  • How to use ffmpeg for live streaming fragmented mp4 ?

    2 May 2018, by Cross_

    Following a variety of Stack Overflow suggestions, I am able to create a fragmented MP4 file, then slice it into the header part (FTYP & MOOV) and various segment files (each containing MOOF & MDAT). Using Media Source Extensions, I download and add the individual segments; that's all working great.

    Now I would like to create a live-streaming webcam with the same approach. I had hoped that I could just send the MOOV box to each new client plus the currently streaming segment. This, however, is rejected as invalid data by the browser: I have to start with the first segment, and the segments have to be appended in order. That's not helpful for a live-streaming scenario, where you don't want to watch the whole stream from the start. Is there any way to alter the files so that the segments are truly independent and playback can start from the middle?

    For reference, this is how I am setting up the stream on the OS X server:

    $ ffmpeg -y -hide_banner -f avfoundation -r 30 -s 1280x720 \
        -pixel_format yuyv422 -i default -an -c:v libx264 -profile:v main \
        -level 3.2 -preset medium -tune zerolatency -flags +cgop+low_delay \
        -movflags empty_moov+omit_tfhd_offset+frag_keyframe+default_base_moof+isml \
        -pix_fmt yuv420p | split_into_segments.py

    Playback is done with a slightly modified version of this sample code:
    https://github.com/bitmovin/mse-demo/blob/master/index.html