Advanced search

Media (0)

Word: - Tags -/diogene

No media matching your criteria is available on this site.

Other articles (112)

  • Customize by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (15805)

  • Best logical formula to determine perceptual / "experienced" quality of a video, given resolution / fps and bitrate?

    20 March 2023, by JamesK

    I am looking for a formula that can provide me with a relatively decent approximation of a Video's playback quality that can be calculated based off of four metrics : width, height, fps, and bitrate (bits/sec). Alternatively, I can also use FFMPEG or similar tools to calculate a Video's playback quality, if any of those tools provide something like what I am looking for here.
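
    (On the "FFMPEG or similar tools" angle: ffmpeg builds that include libvmaf can compute a perceptual VMAF score, but only by comparing against a reference copy of the video, so it only partially covers the no-reference case described here. A minimal sketch, with hypothetical file names:)

    ffmpeg -i distorted.mp4 -i reference.mp4 -lavfi libvmaf -f null -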

    


    An example of what a Video might look like in my problem is as follows :

    


    interface Video {
      /** The width of the Video (in pixels). */
      width: number
      /** The height of the Video (in pixels). */
      height: number
      /** The frame rate of the Video (frames per second). */
      fps: number
      /** The bitrate of the video, in bits per second (e.g. 5_000_000 = 5Mbit/sec) */
      bitrate: number
    }


    


    I came up with the following function to compute the average amount of bits available for any given pixel per second :

    


    const computeVideoQualityScalar = (video: Video): number => {
      // The amount of pixels pushed to the display, per frame.
      const pixelsPerFrame = video.width * video.height

      // The amount of pixels pushed to the display, per second.
      const pixelsPerSecond = pixelsPerFrame * video.fps

      // The average amount of bits used by each pixel, each second,
      // to convey all data relevant to that pixel (e.g. color data, etc)
      const bitsPerPixelPerSecond = video.bitrate / pixelsPerSecond

      return bitsPerPixelPerSecond
    }


    


    While my formula does a good job of providing a more-or-less "standardized" assessment of mathematical quality for any given video, it falls short when I try to use it to compare videos of different resolutions. For example, a 1080p 60fps video with a bitrate of 10 Mbit/sec has greater visual fidelity (at least subjectively, to my eyes) than a 720p 30fps video with a bitrate of 9 Mbit/sec, but my formula scores the 720p 30fps video significantly higher, because the 720p video has more bits available per pixel per second than the 1080p video.
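
    To make the gap concrete (assuming 1920x1080 and 1280x720 frame sizes for the 1080p and 720p examples):

    // 1080p60 @ 10 Mbit/s: 10_000_000 / (1920 * 1080 * 60) ≈ 0.080 bits per pixel per second
    // 720p30  @  9 Mbit/s:  9_000_000 / (1280 *  720 * 30) ≈ 0.326 bits per pixel per second
    // so the formula above rates the 720p clip roughly 4x higher.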

    


    I am struggling to come up with either a different way to calculate the "subjective video quality" of a given video, or a way to extend my existing idea here.
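
    One direction worth sketching (not a validated model; the log2 detail term, the 0.07 bits-per-pixel "enough bits" threshold and the clamping are all assumptions that would need tuning against subjective ratings) is to treat perceived quality as two factors: how much detail the resolution and frame rate can show, and how cleanly that detail is encoded:

    // Hypothetical two-factor score: detail capacity x encoding cleanliness.
    // All constants here are assumptions to be tuned, not known values.
    const computePerceivedQualityScore = (video: Video): number => {
      const pixelsPerSecond = video.width * video.height * video.fps

      // More pixels per second can show more detail, with diminishing returns.
      const detailFactor = Math.log2(pixelsPerSecond)

      // Encoding cleanliness saturates once there are "enough" bits per pixel.
      const bitsPerPixelPerSecond = video.bitrate / pixelsPerSecond
      const encodingFactor = Math.min(1, bitsPerPixelPerSecond / 0.07)

      return detailFactor * encodingFactor
    }

    With the example numbers above, both clips clear the bits-per-pixel threshold, so the 1080p60 clip wins on the detail term (≈ 26.9 vs ≈ 24.7), matching the subjective ordering described.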

    


  • ffmpeg padding doesn't scale when changing resolution

    5 March 2023, by Martin

    I have an ffmpeg command which takes a bunch of audio files and 3 image files, and renders a video from them.

    


    Image input dimensions:

    


      

    • 1_front.jpg 600w x 593h
    • 2_back.jpg 600w x 466h
    • 3_cd.jpg 600w x 598h

    The video has a resolution of w=600 h=593, which is the resolution of the first image.

    


    Here's the full command

    


    ffmpeg
 -r 2 -i "E:\filepath\10. Deejay Punk-Roc - Knock 'em All The Way Out.aiff"
 -r 2 -i "E:\filepath\11. Deejay Punk-Roc - Spring Break.aiff" -r 2 -i "E:\filepath\12. Deejay Punk-Roc - Fat Gold Chain.aiff" 
 -r 2 -i "E:\filepath\1_front.jpg" -r 2 -i "E:\filepath\2_back.jpg" -r 2 -i "E:\filepath\3_cd.jpg" 
 
 -filter_complex 
 "
 [0:a][1:a][2:a]concat=n=3:v=0:a=1[a]; 
 
 [3:v]scale=w=600:h=593,setsar=1,loop=580.03:580.03[v3]; 

 [4:v]pad=600:593:0:63:color=pink,setsar=1,loop=580.03:580.03[v4];
 
 [5:v]scale=w=600:h=593,setsar=1,loop=580.03:580.03[v5];
 
 [v3][v4][v5]concat=n=3:v=1:a=0,pad=ceil(iw/2)*2:ceil(ih/2)*2[v]"
 
  -map "[v]" -map "[a]" -c:a pcm_s32le -c:v libx264 -bufsize 3M -crf 18 -pix_fmt yuv420p -tune stillimage -t 870.04 
  "E:\filepath\vidOutPutCorrect.mkv"



    


    Within filter_complex, this second part adds padding to the second image so that it does not get stretched or cropped:

    


    [4:v]pad=600:593:0:63:color=pink,setsar=1,loop=580.03:580.03[v4];


    


    Specifically this part

    


    pad=600:593:0:63:color=pink


    


    Which I understand means w=600 and h=593, but I don't know what the last part, 0:63, means.
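
    (For reference, the positional parameters of ffmpeg's pad filter are width:height:x:y[:color], where x:y is the offset at which the input is placed inside the padded canvas. So, for the 600x466 back cover as input:)

    pad=600:593:0:63:color=pink
    places the input at x=0, y=63 inside a 600x593 pink canvas;
    (593 - 466) / 2 ≈ 63, so the image ends up vertically centered.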

    


    You can see in the output that the pink padding is applied correctly.

    


    But I want to render the video at a resolution of w=1920 h=1898 instead of w=600 h=593.

    


    So I updated the command with the new resolution:

    


    ffmpeg -r 2 -i "E:\filepath\10. Deejay Punk-Roc - Knock 'em All The Way Out.aiff" -r 2 -i "E:\filepath\11. Deejay Punk-Roc - Spring Break.aiff" -r 2 -i "E:\filepath\12. Deejay Punk-Roc - Fat Gold Chain.aiff" -r 2 -i "E:\filepath\1_front.jpg" -r 2 -i "E:\filepath\2_back.jpg" -r 2 -i "E:\filepath\3_cd.jpg" 

-filter_complex "
[0:a][1:a][2:a]concat=n=3:v=0:a=1[a];

[3:v]scale=w=1920:h=1898,setsar=1,loop=580.03:580.03[v3];

[4:v]pad=1920:1898:0:63:color=pink,setsar=1,loop=580.03:580.03[v4];

[5:v]scale=w=1920:h=1898,setsar=1,loop=580.03:580.03[v5];

[v3][v4][v5]concat=n=3:v=1:a=0,pad=ceil(iw/2)*2:ceil(ih/2)*2[v]"

 -map "[v]" -map "[a]" -c:a pcm_s32le -c:v libx264 -bufsize 3M -crf 18 -pix_fmt yuv420p -tune stillimage -t 870.04 "E:\filepath\slidet.mkv"


    


    My video does in fact now have a resolution of 1920x1898, which is great, but in the second image's padded section the image is very small and stuck in a corner.

    



    


    So this line works :

    


    pad=600:593:0:63:color=pink


    


    but with a different resolution it looks bad, with the image in the top-left corner:

    


    pad=1920:1898:0:63:color=pink


    


    What do I need to change 0:63 to in order to have the image be centered?
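
    (A possible fix, sketched under the assumption that the 600x466 back cover should first be scaled up to the new 1920-pixel width: the pad filter accepts expressions, so the offsets can be computed instead of hard-coded.)

    [4:v]scale=1920:-2,pad=1920:1898:(ow-iw)/2:(oh-ih)/2:color=pink,setsar=1,loop=580.03:580.03[v4];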

    


    Download files : http://www.mediafire.com/folder/e8ja1n8elszk1lu,dxw4vglrz7polyh,ojjx6kcqruksv5r,lah9rano4svj46o,q5jg0083vbj9y1p,d3pt8ydf3ulqm5m/shared

    


  • ffmpeg h264 to mp4 conversion from multiple files fails to preserve in-sequence resolution changes

    1 July 2023, by LB2

    This will be a long post, so I thank you in advance for your patience in digesting it.

    


    Context

    


    I have different sources that generate visual content which eventually all needs to be composed into a single .mp4 file. The sources are:

    


      

    • H.264 video (encoded using CUDA NVENC).
      • This video can have in-sequence resolution changes, which are natively supported by the H.264 codec.
      • I.e. a stream may start at HxW resolution and change mid-stream to WxH. This happens because the source is a camera device that can be rotated and flipped between portrait and landscape (think of a phone camera recording video, the phone being flipped from one orientation to the other, and the recording adjusting its encoding for proper scaling and orientation).
      • When rotation occurs, most of the time H & W are simply swapped, but they may actually become entirely new values; e.g. in some cases 1024x768 will switch to 768x1024, but in other cases 1024x768 may become 460x640 (this depends on source camera capabilities that I have no control over).
    • JPEGs. A series (a.k.a. batch) of still JPEGs.
      • The native resolution of the JPEGs may or may not match the video resolution in the earlier bullet.
      • JPEGs can also reflect rotation of the device, so some JPEGs in a sequence may start at HxW resolution and then, from some arbitrary JPEG file onward, flip to WxH. As with the video, the dimensions are likely to be just a swap, but may become altogether different values.
    • There can be any number of batches and intermixes between video and still sources. E.g. V1 + S2 + S3 + V4 + V5 + V6 + S7 + ...
    • There can be any number of resolution changes between or within batches. E.g. V1;r1 + V1;r2 + S2;r1 + S2;r3 + V3;r2 + ... (where the first subscript is the batch sequence and rX is the resolution)

    Problem

    


    I'm attempting to do this conversion with ffmpeg and can't quite get it right. The problem is that I can't get the output to respect the source resolutions; it just squishes everything into a single output resolution.

    


    Example of squishing problem

    


    As already mentioned above, H.264 supports resolution changes in-sequence (mid-stream), and it should be possible to convert and concatenate all the content and have final output contain in-sequence resolution changes.

    


    Since MP4 is just a container, I'm assuming that MP4 files can do so as well?

    


    Attempts so far

    


    The approach thus far has been to take each batch of content (i.e. a .h264 video or a set of JPEGs) and convert each individually to .mp4. Video is converted using -c copy to ensure it doesn't get transcoded, e.g.:

    


    ffmpeg -hide_banner -i videoX.h264 -c copy -vsync vfr -video_track_timescale 90000 intermediateX.mp4


    


    ... and JPEGs are converted using -f concat

    


    ffmpeg -hide_banner -f concat -safe 0 -i jpegsX.txt -vf 'scale=trunc(iw/2)*2:trunc(ih/2)*2' -r 30 -vsync vfr -video_track_timescale 90000 intermediateX.mp4
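
    (For context, jpegsX.txt is assumed here to be a standard concat-demuxer list; the file names and durations below are purely illustrative.)

    file 'batchX/img_0001.jpg'
    duration 0.5
    file 'batchX/img_0002.jpg'
    duration 0.5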


    


    ... and then all the intermediates are concatenated together:

    


    ffmpeg -hide_banner -f concat -safe 0 -i final.txt -pix_fmt yuv420p -c copy -vsync vfr -video_track_timescale 90000 -metadata title='yabadabadoo' -fflags +bitexact -flags:v +bitexact -flags:a +bitexact final.mp4
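
    (Similarly, final.txt is assumed to list the intermediates in playback order, e.g.:)

    file 'intermediate1.mp4'
    file 'intermediate2.mp4'
    file 'intermediate3.mp4'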


    


    This concatenates, but if the resolution changes at some midpoint, then that part of the content comes out squished/stretched in the final output.

    


    Use h.264 as intermediates

    


    All the intermediates are produced the same way, except as .h264. All the intermediate .h264 files are cat'ed together, like `cat intermediate1.h264 intermediate2.h264 > final.h264`.

    


    If final output is final.mp4, the output is incorrect and images are squished/stretched.

    


    If final.h264, then at least it seems to respect the aspect ratios of the input and manages to produce correct-looking output. However, examining it with ffprobe, it seems to use weird SAR ratios: the first frames are width=1440 height=3040 sample_aspect_ratio=1:1, but later the SAR takes on values like width=176 height=340 sample_aspect_ratio=1545:176, which I suspect isn't right, since all the original input had "square pixels". I think the reason is that it was composed out of different-sized JPEGs, and the concat filter somehow caused ffmpeg to manipulate the SAR "to get things to fit".
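
    (For anyone reproducing the inspection step, a per-frame width/height/SAR dump of the kind described can be pulled with something along these lines; the input file name is illustrative:)

    ffprobe -hide_banner -select_streams v:0 -show_entries frame=width,height,sample_aspect_ratio -of csv final.h264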

    


    But at least it renders respectably, though it's hard to say with ffplay whether a player would actually see the resolution change and resize accordingly.

    


    And that's .h264; I need the final output to be .mp4.

    


    Use -vf filter

    


    I tried enforcing the SAR using -vf 'scale=trunc(iw/2)*2:trunc(ih/2)*2,setsar=1:1' (the scaling is to deal with odd-dimensioned JPEGs), but it still produces frames with SAR values like those stated earlier.

    


    Other thoughts

    


    For now, while I haven't given up, I'm trying to avoid having my code examine each individual JPEG in a batch to see whether there are differing sizes, split the batch so that each sub-batch is homogeneous resolution-wise, and generate individual intermediate .h264 files so that the SAR remains sane, while keeping my fingers crossed that the final output would come out correctly. It would be very slow, unfortunately.
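
    (If that fallback does become necessary, a rough Node/TypeScript sketch of the batch-splitting step, grouping consecutive JPEGs by their probed resolution; all names are hypothetical:)

    import { execFileSync } from "child_process"

    // Ask ffprobe for a JPEG's dimensions, e.g. "600x466".
    const probeResolution = (path: string): string =>
      execFileSync("ffprobe", [
        "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=width,height",
        "-of", "csv=s=x:p=0",
        path,
      ]).toString().trim()

    // Split a batch into consecutive sub-batches that share one resolution,
    // so each sub-batch can become its own intermediate.
    const splitByResolution = (jpegPaths: string[]): string[][] => {
      const batches: string[][] = []
      let currentResolution = ""
      for (const path of jpegPaths) {
        const resolution = probeResolution(path)
        if (resolution !== currentResolution) {
          batches.push([])
          currentResolution = resolution
        }
        batches[batches.length - 1].push(path)
      }
      return batches
    }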

    


    Question

    


    What's the right way to deal with all of that using ffmpeg, and how do I concatenate multiple varying-resolution sources into a final mp4 so that it respects resolution changes mid-stream?