

Other articles (54)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. Just subscribe to the translators' mailing list to ask for more information.
    At present, MediaSPIP is only available in French and (...)

  • Use it, talk about it, critique it

    10 April 2011

    The first thing to do is talk about it, either directly with the people involved in its development or with those around you, to convince new people to use it.
    The larger the community, the faster the software will evolve...
    A mailing list is available for discussion between users.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (10864)

  • ffmpeg: use vidstabtransform to overlay it over a blurred background

    5 November 2023, by konewka

    I am using ffmpeg to concatenate multiple video clips taken of the same object over multiple timeframes. To make sure the videos are properly aligned (and therefore show the object in roughly the same position), I manually identify two points in the first frame of each clip, and use those to calculate the scaling and positioning necessary for proper alignment. I'm using Python for this, and it also generates the ffmpeg command for me. When it has calculated that the appropriate scale of the video is less than 100%, parts of the frame will become black. To counter that, I overlay the scaled and positioned video over a blurred version of the original video (like this effect).

    


    Now, additionally, some of the video clips are a bit shaky, so my flow first applies the vidstabdetect and vidstabtransform filters and uses the stabilized version as input for my final command. However, if the shaking is significant, vidstabtransform will zoom in, so I either lose some of the detail around the edges or a black border is created around the frame. As I later include the stabilized version of the video in the concatenation, possibly scaled down, I would rather perform the vidstabtransform step inside my command and feed its output directly into the overlay over the blurred version. That way, the clip would rotate across the frame as it is stabilized while being shown over the blurred background. Is it possible to achieve this using ffmpeg, or am I trying to stretch it too far?
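    For what it's worth, vidstabtransform has an optzoom option that controls exactly this trade-off: optzoom=1 (the default) applies a static zoom so that borders never appear, while optzoom=0 disables zooming and leaves the compensating motion visible, with crop=black filling the exposed borders. A minimal sketch (file names are placeholders):

    ffmpeg -i video1.mp4 -vf "vidstabtransform=input=transform.trf:optzoom=0:crop=black" video1_stabilized_nozoom.mp4

    With optzoom=0 the clip keeps its full resolution and simply shifts and rotates inside the frame, which is the behaviour needed before overlaying it on a blurred copy.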

    


    As a minimal example, these are my commands. In the final command, each input is split into two copies: one is blurred, the other is scaled, the scaled copy is overlaid on the blurred one at a specific position, and the two results are concatenated:

    


    ffmpeg -i video1.mp4 -vf vidstabdetect=output=transform.trf -f null - 

ffmpeg -i video1.mp4 -vf vidstabtransform=input=transform.trf video1_stabilized.mp4

# same for video2.mp4

ffmpeg -i video1_stabilized.mp4 -i video2_stabilized.mp4 -filter_complex "
    [0:v]split=2[v0blur][v0scale];
    [v0blur]gblur=sigma=50[v0blur];
    [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];
    [v0blur][v0scale]overlay=x=100:y=200[v0];
    [1:v]split=2[v1blur][v1scale];
    [v1blur]gblur=sigma=50[v1blur];
    [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
    [v1blur][v1scale]overlay=x=150:y=150[v1];
    [v0][v1]concat=n=2"
-c:v libx264 -r 30 out.mp4


    


    So, I know I can put the vidstabtransform step into the filter_complex graph (I'll still do the detection in a separate step), but can I also use it in such a way that the stabilization happens over the blurred background, with the clip moving around the frame as it is stabilized?

    


    EDIT: to include vidstabtransform in the filter graph, it would then look like this:

    


    ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "
    [0:v]vidstabtransform=input=transform1.trf[v0stab];
    [v0stab]split=2[v0blur][v0scale];
    [v0blur]gblur=sigma=50[v0blur];
    [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];
    [v0blur][v0scale]overlay=x=100:y=200[v0];
    [1:v]vidstabtransform=input=transform2.trf[v1stab];
    [v1stab]split=2[v1blur][v1scale];
    [v1blur]gblur=sigma=50[v1blur];
    [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
    [v1blur][v1scale]overlay=x=150:y=150[v1];
    [v0][v1]concat=n=2"
-c:v libx264 -r 30 out.mp4
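    Assuming the optzoom idea sketched earlier, only the two vidstabtransform chains in this graph would need to change, e.g.:

    [0:v]vidstabtransform=input=transform1.trf:optzoom=0:crop=black[v0stab];
    [1:v]vidstabtransform=input=transform2.trf:optzoom=0:crop=black[v1stab];

    so that each stabilized clip keeps its full size and visibly shifts against the blurred background instead of being zoomed and cropped.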


    


  • Using ffmpeg to generate a DASH manifest that cannot be played by dash.js

    18 March 2019, by Punkhead

    I'm using ffmpeg to encode an incoming stream over the RTMP protocol; the command is as follows:

    ffmpeg -re -i rtmp://localhost:1935${StreamPath} -use_timeline 1 \
    -use_template 1 -window_size 10 -min_seg_duration 5000 -f dash out.mpd
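    As an aside on the flags (an observation, not part of the original question): -min_seg_duration is expressed in microseconds, so 5000 requests 5 ms segments rather than 5 s (which would be 5000000). Newer ffmpeg builds deprecate it in favour of -seg_duration, which takes a duration in seconds:

    ffmpeg -re -i rtmp://localhost:1935${StreamPath} -use_timeline 1 \
    -use_template 1 -window_size 10 -seg_duration 5 -f dash out.mpd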

    The manifest looks like this:

    <?xml version="1.0" encoding="utf-8"?>
    <MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" type="static" mediaPresentationDuration="PT1M36.4S" minBufferTime="PT8.3S">
        <ProgramInformation>
        </ProgramInformation>
        <Period start="PT0.0S">
            <AdaptationSet contentType="video" segmentAlignment="true" bitstreamSwitching="true" frameRate="30/1">
                <Representation mimeType="video/mp4" codecs="avc1.640028" width="1920" height="1080" frameRate="30/1">
                    <SegmentTemplate timescale="15360" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="4">
                        <SegmentTimeline>
                            <S t="384000" d="128000"/>
                            <S d="71680"/>
                            <S d="128000" r="4"/>
                            <S d="56832"/>
                            <S d="128000"/>
                            <S d="72704"/>
                        </SegmentTimeline>
                    </SegmentTemplate>
                </Representation>
            </AdaptationSet>
            <AdaptationSet contentType="audio" segmentAlignment="true" bitstreamSwitching="true">
                <Representation mimeType="audio/mp4" codecs="mp4a.40.2" bandwidth="128000" audioSamplingRate="44100">
                    <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"/>
                    <SegmentTemplate timescale="44100" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="4">
                        <SegmentTimeline>
                            <S t="1099755" d="367616"/>
                            <S d="205824"/>
                            <S d="367616" r="4"/>
                            <S d="162816"/>
                            <S d="367616"/>
                            <S d="207872"/>
                        </SegmentTimeline>
                    </SegmentTemplate>
                </Representation>
            </AdaptationSet>
        </Period>
    </MPD>
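    One detail worth noting in this manifest (a hypothesis, not a confirmed diagnosis): the audio Representation carries a bandwidth attribute, but the video Representation does not, which typically happens when ffmpeg cannot determine the input video bitrate; players may rely on bandwidth for their adaptation logic. A hedged sketch that declares an explicit video bitrate so the muxer can write one (the 2M figure is an arbitrary placeholder):

    ffmpeg -re -i rtmp://localhost:1935${StreamPath} -c:v libx264 -b:v 2M \
    -use_timeline 1 -use_template 1 -window_size 10 -min_seg_duration 5000 -f dash out.mpd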

    When I try to play it in the dash.js player, an error occurs:

    [112] Parsing complete: ( xml2json: 3.50ms, objectiron: 1.76ms, total: 0.00526s) Debug.js:127
    [116] SegmentTimeline detected using calculated Live Edge Time Debug.js:127
    [118] MediaSource attached to element.  Waiting on open... Debug.js:127
    [119] Manifest has been refreshed at Tue Jan 02 2018 01:57:35 GMT+0800 [1514829455.1] Debug.js:127
    [155] MediaSource is open! Debug.js:127
    [156] Duration successfully set to: 96.4 Debug.js:127
    [157] Added 0 inline events Debug.js:127
    [158] video codec: video/mp4;codecs="avc1.640028" Stream.js:225
    Uncaught TypeError: Cannot read property 'type' of null
       at z (Stream.js:225)
       at C (Stream.js:285)
       at D (Stream.js:373)
       at E (Stream.js:398)
       at Object.d [as activate] (Stream.js:107)
       at y (StreamController.js:363)
       at MediaSource.c (StreamController.js:342)

    Then playback fails.

    Is it because I didn't set the ffmpeg parameters correctly, or is this a bug in dash.js?

    I'm really stuck here!

  • Haskell - Converting multiple images into a video file - ffmpeg-light's frameWriter function fails

    26 October 2017, by oRole

    Situation
    Currently I am working on an image-processing application that uses ffmpeg-light to fetch all the frames of a given video file, so that the program can afterwards apply grayscaling and edge-detection algorithms to each of them.

    With the help of friendly stackoverflowers I was able to set up a method capable of converting several images into one video file using ffmpeg-light's frameWriter function.

    Problem
    The application runs fine up to the moment it hits the frameWriter function, and I don't really know why, as no errors or exception messages are thrown. (OS: Win 10 64-bit)

    What did I try?
    I tried:

    - different versions of ffmpeg (from 3.2 to 3.4).

    - running ffmpeg.exe from the command line to check whether any codecs were missing, but every conversion I tried worked.

    - different EncodingParams combinations, like: EncodingParams width height fps (Nothing) (Nothing) "medium"

    Question
    Unfortunately, none of the above worked, and the web lacks information on this specific case. Maybe I missed something essential (like GHC flags) or made a bigger mistake in my code. That is why I have to ask you: do you have any suggestions/advice for me?

    Haskell Packages

    - ffmpeg-light-0.12.0

    - JuicyPixels-3.2.8.3

    Code

    {--------------------------------------------------------------------------------------------
    Applies "juicyToFFmpeg'" and "getFPS" to a list of images and saves the output-video
    to a user defined location.
    ---------------------------------------------------------------------------------------------}    
    saveVideo :: String -> [Image PixelYA8] -> Int -> IO ()
    saveVideo path imgs fps = do
            -- program stops after hitting next line --
            frame <- frameWriter ep path
            ------------------------------------------------
            Prelude.mapM_ (frame . Just) ffmpegImgs
            frame Nothing
            where ep = EncodingParams width height fps (Just avCodecIdMpeg4) (Just avPixFmtGray8a) "medium"
                  width      = toCInt $ imageWidth  $ head imgs
                  height     = toCInt $ imageHeight $ head imgs
                  ffmpegImgs = juicyToFFmpeg' imgs
                  toCInt x   = fromIntegral x :: CInt

    {--------------------------------------------------------------------------------------------
    Converts a single image from JuicyPixel-format to ffmpeg-light-format.
    ---------------------------------------------------------------------------------------------}      
    juicyToFFmpeg :: Image PixelYA8 -> (AVPixelFormat, V2 CInt, Vector CUChar)
    juicyToFFmpeg img = (avPixFmtGray8a, V2 (toCInt width) (toCInt height), ffmpegData)
                     where toCInt   x   = fromIntegral x :: CInt
                           toCUChar x   = fromIntegral x :: CUChar
                           width        = imageWidth img
                           height       = imageHeight img
                           ffmpegData   = VS.map toCUChar (imageData img)

    {--------------------------------------------------------------------------------------------
    Converts a list of images from JuicyPixel-format to ffmpeg-light-format.
    ---------------------------------------------------------------------------------------------}                        
    juicyToFFmpeg' :: [Image PixelYA8] -> [(AVPixelFormat, V2 CInt, Vector CUChar)]
    juicyToFFmpeg' imgs = Prelude.map juicyToFFmpeg imgs  -- preserves frame order; foldr with (++) reversed it

    {--------------------------------------------------------------------------------------------
    Simply calculates the FPS for image-to-video conversion.
    -> frame :: (Double, DynamicImage) where Double is a timestamp of when it got extracted
    ---------------------------------------------------------------------------------------------}
    getFPS :: [(Double, DynamicImage)] -> Int
    getFPS frames = round $ fromIntegral frameCount / (lastTimestamp - firstTimestamp)  -- frames per second, not seconds per frame
                 where firstTimestamp = fst $ head frames
                       lastTimestamp  = fst $ last frames
                       frameCount     = length frames
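    Since the hang happens inside frameWriter with avCodecIdMpeg4 plus avPixFmtGray8a, one cheap isolation step is to try a codec/pixel-format pairing that is known to work; MPEG-4 part 2 encoders generally expect yuv420p-style input rather than gray8a, so that combination is a plausible suspect (an assumption, not a verified diagnosis). A minimal sketch using ffmpeg-light's defaults and its JuicyPixels bridge:

    {- Minimal isolation test: write 30 solid-gray RGB frames with
       ffmpeg-light's default codec settings. If this succeeds where the
       Gray8A/Mpeg4 variant stalls, the pixel-format/codec pairing above
       is the likely culprit (assumption). -}
    module Main where

    import Codec.FFmpeg       (initFFmpeg, defaultParams)
    import Codec.FFmpeg.Juicy (imageWriter)
    import Codec.Picture      (Image, PixelRGB8 (..), generateImage)

    -- A solid mid-gray 320x240 test frame.
    testFrame :: Image PixelRGB8
    testFrame = generateImage (\_ _ -> PixelRGB8 128 128 128) 320 240

    main :: IO ()
    main = do
      initFFmpeg                                  -- must run once before any encoding
      writeFrame <- imageWriter (defaultParams 320 240) "test.mp4"
      mapM_ (writeFrame . Just) (replicate 30 testFrame)
      writeFrame Nothing                          -- Nothing flushes and closes the file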