Advanced search

Media (0)

Keyword: - Tags -/flash

No media matching your criteria is available on the site.

Other articles (112)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to check.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as MP4, OGV and WebM (playable by the HTML5 player), with MP4 also playable by the Flash fallback.
    Audio files are encoded as MP3 and Ogg (playable by the HTML5 player), with MP3 also playable by the Flash fallback.
    Where possible, text is analyzed to extract the data search engines need for indexing, then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
    (A sketch of equivalent ffmpeg invocations follows this article list.)

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.
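
The encoding article above describes automatic conversion into HTML5- and Flash-compatible formats. As a rough illustration of what such a pipeline runs, the sketch below shells out to ffmpeg once per target format; the source file name and codec flags are assumptions made for the example, not MediaSPIP's actual configuration.

    import subprocess

    # Illustrative only: these commands reproduce the format matrix the
    # article lists, not MediaSPIP's real encoding settings.
    SOURCE = "upload.mov"  # hypothetical uploaded file

    video_targets = [
        ["-c:v", "libx264",   "-c:a", "aac",       "out.mp4"],   # HTML5 + Flash
        ["-c:v", "libtheora", "-c:a", "libvorbis", "out.ogv"],   # HTML5
        ["-c:v", "libvpx",    "-c:a", "libvorbis", "out.webm"],  # HTML5
    ]
    for extra in video_targets:
        # -y overwrites existing outputs without prompting
        subprocess.run(["ffmpeg", "-y", "-i", SOURCE] + extra, check=True)

    # Audio-only variants: -vn drops any video stream
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-vn",
                    "-c:a", "libmp3lame", "out.mp3"], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-vn",
                    "-c:a", "libvorbis", "out.ogg"], check=True)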

On other sites (8402)

  • mov: Support mdcv and clli boxes for mastering display and color light level

    4 October 2017, by Vittorio Giovara
    mov: Support mdcv and clli boxes for mastering display and color light level
    

    Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>

    • [DH] libavformat/mov.c
  • Haskell - Turning multiple image-files into one video-file using the ffmpeg-light package

    25 April 2021, by oRole

    Background
    I wrote an application for image processing which uses the ffmpeg-light package to fetch all the frames of a given video file, so that the program can afterwards apply grayscale and edge-detection algorithms to each of them.


    Now I'm trying to put all of the frames back into a single video-file.


    Used Libs
    ffmpeg-light-0.12.0
    JuicyPixels-3.2.8.3
    ...


    What have I tried?
    I have to be honest: I didn't really try anything, because I'm kinda clueless where and how to start. I saw that there is a package called Command which allows running processes/commands via the command line. With that I could use ffmpeg (not ffmpeg-light) to create a video out of image files, which I would first have to save to the hard drive, but that would be kinda hacky.

    Within the documentation of ffmpeg-light on hackage (ffmpeg-light docu) I found the frameWriter function, which sounds promising.


    frameWriter :: EncodingParams -> FilePath -> IO (Maybe (AVPixelFormat, V2 CInt, Vector CUChar) -> IO ())


    I guess FilePath would be the location where the video file gets stored, but I can't really imagine how to apply the frames as EncodingParams to this function.


    Others
    I can access:

    • r, g, b, a as well as y, a values
    • image width / height / format

    Question
    Is there a way to achieve this using the ffmpeg-light package?


    As the ffmpeg-light package lacks documentation when it comes to converting images to video, I would really appreciate your help. (I do not expect a fully working solution.)


    Code
    The code that reads the frames:


    -- Imports needed by this snippet; the exact module layout may differ
    -- slightly between ffmpeg-light versions.
    import Codec.FFmpeg (InputSource(File))
    import Codec.FFmpeg.Juicy (imageReaderTime)
    import Codec.Picture (DynamicImage(ImageRGB8), Image, PixelRGB8)
    import Control.Exception (SomeException, try)
    import Data.Tuple (swap)

    -- Gets and returns all frames that a given video contains.
    getAllFrames :: String -> IO [(Double, DynamicImage)]
    getAllFrames vidPath = do
      result <- try (imageReaderTime $ File vidPath)
                  :: IO (Either SomeException (IO (Maybe (Image PixelRGB8, Double)), IO ()))
      case result of
        Left _ -> do
          printStatus "Invalid video-path or invalid video-format detected." "Video"
          return []
        Right (getFrame, _) -> addNextFrame getFrame []

    -- Accumulates all available frames of the video.
    -- printStatus is the asker's own logging helper (not shown).
    addNextFrame :: IO (Maybe (Image PixelRGB8, Double)) -> [(Double, DynamicImage)] -> IO [(Double, DynamicImage)]
    addNextFrame getFrame frames = do
      frame <- getFrame
      case frame of
        Nothing -> do
          printStatus "No more frames found." "Video"
          return frames
        Just f  -> do
          -- reuse the frame just read; calling getFrame again here (as the
          -- original did) would silently skip every other frame
          let newFrameData = ImageRGB8 <$> swap f
          printStatus ("Frame: " ++ show (length frames) ++ " added.") "Video"
          addNextFrame getFrame (frames ++ [newFrameData])


    Where I am stuck / The code that should convert images to video:


    -- Converts from several images to video
    juicyToFFmpeg :: [Image PixelYA8] -> ?
    juicyToFFmpeg imgs = undefined
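
    A possible direction, sketched against the ffmpeg-light 0.12 documentation: besides frameWriter, Codec.FFmpeg.Juicy lists imageWriter, a JuicyPixels-friendly wrapper of type EncodingParams -> FilePath -> IO (Maybe (Image p) -> IO ()), and Codec.FFmpeg.Encode lists defaultParams for building EncodingParams from a width and height. Treat these names and module locations as assumptions to verify against your installed version. Since PixelYA8 is likely not among the pixel formats the Juicy layer accepts, each frame is expanded to PixelRGB8 first:

    -- Sketch under the assumptions above, not a verified implementation.
    import Codec.FFmpeg        (initFFmpeg)
    import Codec.FFmpeg.Encode (defaultParams)
    import Codec.FFmpeg.Juicy  (imageWriter)
    import Codec.Picture

    juicyToFFmpeg :: [Image PixelYA8] -> FilePath -> IO ()
    juicyToFFmpeg []                  _       = putStrLn "No frames to encode."
    juicyToFFmpeg imgs@(firstImg : _) outPath = do
      initFFmpeg  -- must run once before any encoding
      let w = fromIntegral (imageWidth  firstImg)
          h = fromIntegral (imageHeight firstImg)
      -- defaultParams picks a default codec and frame rate for this size
      writeFrame <- imageWriter (defaultParams w h) outPath
      -- feed every frame (converted to RGB), then Nothing to finalize
      mapM_ (writeFrame . Just . toRGB) imgs
      writeFrame Nothing
      where
        -- copy luma into r/g/b and drop alpha
        toRGB :: Image PixelYA8 -> Image PixelRGB8
        toRGB = pixelMap (\(PixelYA8 y _) -> PixelRGB8 y y y)

    The Double timestamps collected by getAllFrames are ignored here: frames are written at the fixed rate carried by EncodingParams, and the final Nothing flushes the encoder and closes the file.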


  • tf.contrib.signal.stft returns an empty matrix

    9 December 2017, by matt-pielat

    This is the piece of code I run:

    import tensorflow as tf

    sess = tf.InteractiveSession()

    filename = 'song.mp3' # 30 second mp3 file
    SAMPLES_PER_SEC = 44100

    audio_binary = tf.read_file(filename)

    pcm = tf.contrib.ffmpeg.decode_audio(audio_binary, file_format='mp3', samples_per_second=SAMPLES_PER_SEC, channel_count = 1)
    stft = tf.contrib.signal.stft(pcm, frame_length=1024, frame_step=512, fft_length=1024)

    sess.close()

    The mp3 file is properly decoded, because print(pcm.eval().shape) returns:

    (1323119, 1)

    And there are even some actual non-zero values when I print them with print(pcm.eval()[1000:1010]):

    [[ 0.18793298]
    [ 0.16214484]
    [ 0.16022217]
    [ 0.15918455]
    [ 0.16428113]
    [ 0.19858395]
    [ 0.22861415]
    [ 0.2347789 ]
    [ 0.22684409]
    [ 0.20728172]]

    But for some reason print(stft.eval().shape) evaluates to:

    (1323119, 0, 513) # why the zero dimension?

    And therefore print(stft.eval()) is:

    []

    According to this, the second dimension of the tf.contrib.signal.stft output equals the number of frames. Why are there no frames, though?
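
    A plausible diagnosis, assuming the question's TF 1.x contrib setup: tf.contrib.signal.stft frames the last axis of its input, while tf.contrib.ffmpeg.decode_audio returns shape (samples, channels). The last axis here has length 1, which is shorter than frame_length=1024, so zero frames fit and the frame dimension collapses to 0, exactly the observed (1323119, 0, 513). Transposing the signal to (channels, samples) before the STFT should give non-empty frames:

    import tensorflow as tf

    sess = tf.InteractiveSession()

    SAMPLES_PER_SEC = 44100
    audio_binary = tf.read_file('song.mp3')
    pcm = tf.contrib.ffmpeg.decode_audio(audio_binary, file_format='mp3',
                                         samples_per_second=SAMPLES_PER_SEC,
                                         channel_count=1)  # shape (samples, 1)

    # stft frames the innermost axis, so move the samples there first
    signals = tf.transpose(pcm)  # shape (1, samples)
    stft = tf.contrib.signal.stft(signals, frame_length=1024,
                                  frame_step=512, fft_length=1024)

    print(stft.eval().shape)  # expected (1, num_frames, 513)

    sess.close()

    With the default pad_end=False, num_frames is 1 + (1323119 - 1024) // 512 = 2583; the leading channel axis can then be squeezed away if a 2-D spectrogram is wanted.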