
Other articles (39)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that result in the problem; and a link to the site / page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP:

        Distribution name   Version name           Version number
        Debian              Squeeze                6.x.x
        Debian              Wheezy                 7.x.x
        Debian              Jessie                 8.x.x
        Ubuntu              The Precise Pangolin   12.04 LTS
        Ubuntu              The Trusty Tahr        14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the necessary fixes to add it (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries:
      • FFMpeg: the main encoder; transcodes almost all types of video and audio files into formats readable on the Internet. See this tutorial for its installation.
      • Oggz-tools: inspection tools for ogg files.
      • Mediainfo: retrieves information from most video and audio formats.
    Complementary and optional binaries:
      • flvtool2: (...)
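
    A quick way to verify that an installation has these tools is to check that each binary is on the PATH. This is only a sketch: the binary names below are assumptions based on the packages mentioned above (oggz-tools, for instance, ships several oggz-* tools):

```shell
# Report whether each binary is found on PATH.
check_bins() {
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "$bin: found"
    else
      echo "$bin: missing"
    fi
  done
}

# Binary names are assumptions based on the packages named above.
check_bins ffmpeg oggz-info mediainfo flvtool2
```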

On other sites (7815)

  • ffmpeg h264 to mp4 conversion from multiple files fails to preserve in-sequence resolution changes

    1 July 2023, by LB2

    This will be a long post, so I thank you in advance for your patience in digesting it.

    


    Context

    I have different sources that generate visual content that eventually needs to be composed into a single .mp4 file. The sources are:

    • H.264 video (encoded using CUDA NVENC).
      • This video can have in-sequence resolution changes, which the H.264 codec natively supports.
      • I.e. a stream may start at HxW resolution and change mid-stream to WxH. This happens because the source is a camera device that can be rotated and flipped between portrait and landscape (think of a phone camera recording video while the phone is flipped from one orientation to another, with the recording adjusting its encoding for proper video scaling and orientation).
      • When rotation occurs, most of the time H & W are simply swapped, but they may actually become entirely new values: in some cases 1024x768 will switch to 768x1024, but in other cases 1024x768 may become 460x640 (depending on source camera capabilities that I have no control over).
    • JPEGs. A series (a.k.a. batch) of still JPEGs.
      • The native resolution of the JPEGs may or may not match the video resolution in the earlier bullet.
      • JPEGs can also reflect rotation of the device, so a sequence may start at HxW resolution and then, from some arbitrary JPEG file onwards, flip to WxH. As with video, the dimensions are most likely just swapped, but may become altogether different values.
    • There can be any number of batches and intermixes between video and still sources, e.g. V1 + S2 + S3 + V4 + V5 + V6 + S7 + ...
    • There can be any number of resolution changes between or within batches, e.g. V1;r1 + V1;r2 + S2;r1 + S2;r3 + V3;r2 + ... (where the first subscript is the batch sequence and rX is the resolution)
    Problem

    I'm attempting to do this conversion with ffmpeg and can't quite get it right. The problem is that I can't get the output to respect the source resolutions; it just squishes everything into a single output resolution.

    Example of squishing problem

    As already mentioned, H.264 supports in-sequence (mid-stream) resolution changes, so it should be possible to convert and concatenate all the content and have the final output contain in-sequence resolution changes.

    Since MP4 is just a container, I'm assuming MP4 files can do so as well?

    Attempts so far

    The approach so far has been to take each batch of content (i.e. an .h264 video or a set of JPEGs) and individually convert it to .mp4. Video is converted using -c copy to ensure it isn't transcoded, e.g.:

    ffmpeg -hide_banner -i videoX.h264 -c copy -vsync vfr -video_track_timescale 90000 intermediateX.mp4

    ... and JPEGs are converted using -f concat:

    ffmpeg -hide_banner -f concat -safe 0 -i jpegsX.txt -vf 'scale=trunc(iw/2)*2:trunc(ih/2)*2' -r 30 -vsync vfr -video_track_timescale 90000 intermediateX.mp4

    ... and then all the intermediates are concatenated together:

    ffmpeg -hide_banner -f concat -safe 0 -i final.txt -pix_fmt yuv420p -c copy -vsync vfr -video_track_timescale 90000 -metadata title='yabadabadoo' -fflags +bitexact -flags:v +bitexact -flags:a +bitexact final.mp4
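
    For reference, both -f concat inputs above are plain text lists read by ffmpeg's concat demuxer. A minimal sketch of the two list files (the entry names and the duration value are illustrative assumptions, not the actual file names):

```
# jpegsX.txt: one entry per still; 'duration' sets how long each JPEG is shown
file 'imgX_0001.jpg'
duration 0.033
file 'imgX_0002.jpg'
duration 0.033

# final.txt: the per-batch intermediates, in playback order
file 'intermediate1.mp4'
file 'intermediate2.mp4'
```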


    


    This concatenates, but if the resolution changes at some midpoint, that part of the content comes out squished/stretched in the final output.

    


    Use h.264 as intermediates

    All the intermediates are produced the same way, except as .h264. All the intermediate .h264 files are cat'ed together, like cat intermediate1.h264 intermediate2.h264 > final.h264.

    If the final output is final.mp4, the output is incorrect and the images are squished/stretched.

    If it is final.h264, then at least it seems to respect the aspect ratios of the input and manages to produce correct-looking output. However, examining it with ffprobe shows that it uses weird SAR ratios: the first frames are width=1440 height=3040 sample_aspect_ratio=1:1, but later the SAR takes on values like width=176 height=340 sample_aspect_ratio=1545:176, which I suspect isn't right, since all the original input had "square pixels". I think the reason is that the output was composed out of different-sized JPEGs, and the concat filter somehow caused ffmpeg to manipulate the SAR "to get things to fit".

    But at least it renders respectably, though it's hard to say with ffplay whether a player would actually see the resolution change and resize accordingly.

    And that's .h264; I need the final output to be .mp4.

    Use -vf filter

    I tried enforcing the SAR using -vf 'scale=trunc(iw/2)*2:trunc(ih/2)*2,setsar=1:1' (the scaling is to deal with odd-dimension JPEGs), but it still produces frames with SARs like those stated earlier.

    Other thoughts

    For now, while I haven't given up, I'm trying to avoid having my code examine each individual JPEG in a batch to see if there are differing sizes, split the batch so that each sub-batch is homogeneous resolution-wise, and generate an individual intermediate .h264 per sub-batch so that the SAR stays sane, with fingers crossed that the final output would work correctly. It would be very slow, unfortunately.
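
    The sub-batch splitting described above can be sketched as follows. This is a minimal illustration with made-up file names and resolutions; in practice the WxH value for each JPEG would come from something like ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=s=x:p=0 file.jpg:

```shell
# Split a "file WxH" listing into homogeneous sub-batches: a new batch
# starts whenever the resolution differs from the previous entry.
printf '%s\n' \
  'a.jpg 1024x768' \
  'b.jpg 1024x768' \
  'c.jpg 768x1024' \
  'd.jpg 1024x768' |
awk '{ if ($2 != prev) { batch++; prev = $2 } print "batch" batch, $1 }'
```

    Here a.jpg and b.jpg form batch1, c.jpg starts batch2, and d.jpg starts batch3 even though it returns to the earlier resolution, so ordering is preserved.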

    


    Question

    What's the right way to deal with all of that using ffmpeg, and how do I concatenate multiple varying-resolution sources into a final .mp4 so that it respects mid-stream resolution changes?

  • extracting a single frame from MediaElement or FFmpegInterop

    13 January 2016, by Jakub Wisniewski

    I am writing an app (Windows Phone 8.1 Store App) that allows the user to connect to an IP camera. I am using the FFmpeg Interop library for ffmpeg, which allows me to play e.g. rtsp streams in a MediaElement. I now need a way to somehow extract a single frame from the stream or from the media element.

    I have tested another application which allows connecting to IP cameras, IP Centcom, and as far as I know its snapshots only work for mjpeg streams (they were not working for rtsp). Because of that I believe it is impossible, or at the very least very hard, to export a frame from a media element.

    So I have a different question: if anyone has ever used FFmpeg Interop, could you help/explain to me how I could modify/extend FFmpegInteropMSS to add a method called ’GetThumbnailForStream’ that would work similarly to ’GetMediaStreamSource’ but would return a single frame (bitmap or jpg) instead of a MediaStreamSource?

    Any help would be appreciated.

    EDIT:

    I have found something:

    in MediaSampleProvider, in the method WriteAVPacketToStream (line 123), there is the line

    auto aBuffer = ref new Platform::Array<byte>(avPacket->data, avPacket->size);

    and I believe that this is the place that stores the single-frame data that needs to be converted into a bitmap. Now, since I do not know C++ very well, I have a question: how can I convert it into a form that I could return via a public method?

    When returning:

    Platform::Array<byte>^

    I get

    ’FFmpegInterop::MediaSampleProvider’ : a non-value type cannot have any public data members

    EDIT2:

    OK, I am doing the appropriate projection to byte according to this Microsoft information; now I need to check whether this is the correct data.

  • convert a camera stream to an MJPEG + RTSP stream

    15 July 2017, by manman

    I am trying to convert a video stream from a Ubiquiti camera to an rtsp stream decoded as mjpeg. I tried it with ffserver, but it didn’t work out. Up front: a Windows solution would be more suitable in this case, so if anyone knows good Windows software for this kind of thing, please tell me.

    Now to my setup:
    For testing purposes, I used an Ubuntu desktop VM and installed the ffmpeg package, including ffserver, via the command apt-get install ffmpeg.
    Afterwards I used the preconfigured feed (feed1.ffm) to send the data to ffserver with ffmpeg:

    ffmpeg -i rtsp://[Camera-Url] -strict -2 http://localhost:8090/feed1.ffm

    and configured a new stream in ffserver.conf:

    <stream>
     Format rtsp
     Feed feed1.ffm
     VideoCodec mjpeg
     VideoFrameRate 5
     VideoIntraOnly
     VideoSize 352x240
     NoAudio
    </stream>
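
    As a side note, ffserver only listens for RTSP when a global RTSPPort is set in ffserver.conf. A minimal sketch of that line (the 5454 value is an assumption, matching the port used in the URLs tried below):

```
# ffserver.conf, global section (outside any <stream> block)
RTSPPort 5454
```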

    I then tested the stream in VLC player with the following URLs, but none of them worked:

    rtsp://127.0.0.1/jpgvideo.sav
    rtsp://127.0.0.1:5454/jpgvideo.sav
    rtsp://127.0.0.1:5454/jpgvideo.sav.rtsp
    rtsp://127.0.0.1:5454/jpgvideo.rtsp

    Does anyone know why? What am I missing here?


    Note that another, non-rtsp stream works just fine:

    <stream>
     Feed feed1.ffm
     Format mpjpeg
     VideoFrameRate 5
     VideoIntraOnly
     VideoSize 352x240
     NoAudio
    </stream>

    URL:

    http://127.0.0.1:8090/test.mjpg

    In case anybody wonders:
    I am trying to get a video stream on a SPA525G2 Cisco IP phone. This is only supported for Cisco cameras, but according to this link it should also be possible if the stream is cisco-camera-like (rtsp + mjpeg, 5 fps).