
Media (91)

Other articles (63)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals, rapid deployment of multiple unique sites, and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Permissions overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

On other sites (10545)

  • ffmpeg: sending intra and GOP frames over a traditional TCP (or UDP) server

    27 December 2015, by the-owner

    The Problem:
    I want to stream a continuous sequence of bitmaps (effectively a video).
    I want to send the stream over a QTcpSocket/QTcpServer (or a QUdpSocket, why not, or a traditional TCP/UDP server), with the help of the ffmpeg libraries.

    Context and Conditions:
    I code in C++, under msvc2013 win-32, the Qt IDE, with ffmpeg dev 2.8.3.
    We start with a bitmap in raw format, which I obtain from GDI+.
    Client connections can be very slow, so I will use a light video compression format, MPEG-1.
    I already have a TCP server (QTcpServer), so my goal is not to rebuild another server for this...

    Potential algorithm?
    I have thought more about the encoding question than the protocol one; assume the protocol could be good.
    This is the very first time I am doing this, I don't want to embark on something impossible, and I obviously want your insight on this problem, which I have been thinking about for the last month (and for which I have not found a good answer on the internet). So, below is an outline of the algorithm I have imagined:

    Server side:

    1. I convert the first BMP to YUV format thanks to sws_scale(); we name it P in the following (ffmpeg - libswscale)
    2. I MPEG-1-encode the picture P (with avcodec_encode_video2()) and get the encoded data (see the sketch after this list)
    3. I serialize the AVFrame/AVPicture P (maybe we will do this step just once)
    4. At the same time, I push the data into a QByteArray (or a simple byte array?)
    5. I compress P (maybe with qCompress, why not)
    6. I send it to the clients.
    7. If step 6 goes well, we return to step 1; if the next image index is a multiple of k, it will be an intra frame, otherwise part of a GOP
    8. Otherwise, I wait? Or I could account for the lost frame in the code and skip it? (Behind these questions lie questions of server protocol)
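
    To make steps 1, 2 and 6 concrete, here is a minimal sketch of what I have in mind, assuming an already-opened MPEG-1 encoder context (enc) and swscale context (sws); sendToClients() is just a placeholder for my QTcpServer write (ffmpeg 2.8 API, error handling trimmed):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>
    }

    // Placeholder: pushes the encoded packet to connected clients (step 6).
    void sendToClients(const uint8_t *data, int size, bool isIntra);

    // Encode one raw BGRA bitmap as an MPEG-1 frame (steps 1 and 2).
    bool encodeBitmap(AVCodecContext *enc, SwsContext *sws,
                      const uint8_t *bgra, int w, int h, int64_t pts)
    {
        // Step 1: wrap the raw bitmap and convert it to YUV420P with sws_scale()
        AVFrame *yuv = av_frame_alloc();
        yuv->format = AV_PIX_FMT_YUV420P;
        yuv->width  = w;
        yuv->height = h;
        av_frame_get_buffer(yuv, 32);

        const uint8_t *src[1]       = { bgra };
        const int      srcStride[1] = { 4 * w };  // 4 bytes per BGRA pixel
        sws_scale(sws, src, srcStride, 0, h, yuv->data, yuv->linesize);
        yuv->pts = pts;

        // Step 2: MPEG-1-encode the YUV picture
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = nullptr;  // let the encoder allocate the packet buffer
        pkt.size = 0;

        int gotPacket = 0;
        int err = avcodec_encode_video2(enc, &pkt, yuv, &gotPacket);
        if (err == 0 && gotPacket) {
            // The encoder decides intra vs. GOP frame from enc->gop_size (= k);
            // AV_PKT_FLAG_KEY tells me which one I got, so I can tag the packet.
            bool isIntra = (pkt.flags & AV_PKT_FLAG_KEY) != 0;
            sendToClients(pkt.data, pkt.size, isIntra);
            av_free_packet(&pkt);
        }
        av_frame_free(&yuv);
        return err == 0;
    }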

    Client side (summarized):

    1. We get the data (a new intra frame or GOP),
    2. We run an error check on the newly received data with respect to the data received so far,
    3. If the error check says the new intra/GOP is OK, we decode it with ffmpeg (roughly as sketched below) and display the next image of the livestream.
    4. Otherwise, the error is not OK and we take it into account... (it is not my main problem here)
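
    For step 3 on the client, the decoding would look roughly like this (again the ffmpeg 2.8 API; dec is assumed to be an opened MPEG-1 decoder context):

    // Feed one received packet to the decoder; on success 'out' holds a
    // displayable YUV picture. If a packet was lost, decoding recovers at the
    // next intra frame, which is what the error handling above would lean on.
    bool decodePacket(AVCodecContext *dec, const uint8_t *buf, int size, AVFrame *out)
    {
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = const_cast<uint8_t *>(buf);
        pkt.size = size;

        int gotFrame = 0;
        int used = avcodec_decode_video2(dec, out, &gotFrame, &pkt);
        return used >= 0 && gotFrame;
    }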

    Questions:

    This is the solution I imagined. Assuming the questions of server/client protocol are solved, is my algorithm viable for a livestream? Could it work? Is there a better solution (a simpler algorithm, a library other than ffmpeg, or something else)? A link? Of course, critiques of my server protocol are welcome.

    I remind you, this is just an outline.

  • Memory issues when using ffmpeg and gloss to play videos

    21 December 2015, by Noughtmare

    I'm trying to make a video player in Haskell using ffmpeg-light, JuicyPixels and gloss. I'm now able to play video, but frames that have been played stay in memory, which causes major memory issues. How can I avoid storing all the frames in memory?

    Here is my code :

    {-# LANGUAGE FlexibleContexts #-}
    module Main where

    -- For my own code:
    import Graphics.Gloss
    import Codec.FFmpeg
    import Codec.FFmpeg.Juicy
    import Codec.Picture
    import Control.Applicative
    import Data.Maybe
    import Graphics.Gloss.Juicy
    import Control.Monad (when, join)
    import Codec.FFmpeg.Decode
    import Codec.FFmpeg.Enums
    import Control.Monad.Error.Class
    import Control.Arrow (first)
    import Control.Monad.Except (runExceptT)
    import Graphics.Gloss.Interface.IO.Animate
    import Data.IORef
    import Control.Concurrent (threadDelay) -- for threadDelay

    -- Temporary hardcoded resolution
    resolution :: (Int,Int)
    resolution = (640, 360)

    main :: IO ()
    main = do
       -- First initialize ffmpeg, this needs to be run before other ffmpeg functions
       initFFmpeg
       -- Open the samplevideo for reading. video :: IO (IO (Maybe (AVFrame, Double)), IO ())
       video <- runExceptT $ frameReaderTime' avPixFmtRgb24 "SampleVideo_640x360_1mb.flv"
       either
             -- This code gets called when the frameReader reports an error
             (const $ putStrLn "Can't read file")
             -- This opens a new window and plays the video in it on a white background
             (animateFixedIO (InWindow "Nice Window" resolution (10, 10)) white . frameAtWait . fst)
             video

    -- This finds the frame at given time
    frameAtWait :: IO (Maybe (AVFrame, Double)) -> Float -> IO Picture
    frameAtWait getFrame time = do
       -- This gets the next frame from the video
       (frame, t) <- fromJust <$> getFrame
       -- t has to be converted from Double to Float
       let t' = realToFrac t
           -- The difference between the requested time and the actual frame time
           difference = t' - time
       -- If the frame is not yet supposed to be shown
       if difference > 0 then do
           -- Wait until it is
           threadDelay . round . (* 1000000) $ difference
           -- then return it
           fromJust <$> frameToPicture frame
       else
           -- return it immediately
           fromJust <$> frameToPicture frame

    -- This function converts a ffmpeg internal AVFrame to a gloss picture
    frameToPicture :: AVFrame -> IO (Maybe Picture)
    frameToPicture frame = do
       -- convert it to a juicypixels dynamicimage
       dynImage <- toJuicy frame
       -- then convert it to a gloss picture and return it
       return . join $ fmap fromDynamicImage dynImage

  • ffmpeg ignores seg_duration with svt-av1

    25 December 2022, by Asav Mali

    I'm trying to convert a mp4 file containing a HEVC HDR10 stream to AV1 using SVT-AV1 with ffmpeg. For some reason the seg_duration parameter gets ignored :

/usr/bin/ffmpeg -i "/tmp/VODProcessing/testclip.mp4" -map 0:0 \
  -c:v libsvtav1 -b:v 1048576 -maxrate:v 1101004 -bufsize 4194304 \
  -vf zscale=width=640:height=480 \
  -svtav1-params "tile-columns=0:tile-rows=0:scd=0:color-primaries=9:transfer-characteristics=16:matrix-coefficients=9:input-depth=10:fast-decode=1:mastering-display=G(0.265,0.69)B(0.15,0.06)R(0.68,0.32)WP(0.3127,0.329)L(4000.0,0.005):content-light=368,226:enable-hdr=1:color-range=1" \
  -pix_fmt yuv420p10le -color_trc smpte2084 -color_primaries bt2020 -colorspace bt2020nc \
  -chroma_sample_location:v topleft -color_range:v pc -max_muxing_queue_size 1024 \
  -preset 7 -bf 0 -force_key_frames "expr:gte(t,n_forced*2)" -keyint_min 48 \
  -use_timeline 1 -use_template 1 -map_metadata -1 -map_chapters -1 \
  -f hls -seg_duration 4 -hls_time 4 -streaming 1 -hls_list_size 0 \
  -hls_segment_filename "/tmp/VODProcessing/output/testclip/v-av01-480p-av01.0.12M.10_PQ/f-%04d.m4s" \
  -hls_fmp4_init_filename "init-v-av01-480p-av01.0.12M.10_PQ.m4s" \
  -hls_segment_type fmp4 -hls_playlist_type vod \
  -movflags frag_keyframe+frag_every_frame+write_colr+prefer_icc+skip_trailer+faststart \
  -hls_flags independent_segments -strict experimental \
  "/tmp/VODProcessing/output/testclip/v-av01-480p-av01.0.12M.10_PQ/master.m3u8"

    Using a similar command to transcode HEVC to HEVC works as expected, and the segment duration is fine. Does this make any sense? Checking the generated master.m3u8 file always gives me this for AV1:

    #EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:7
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-MAP:URI="init-v-av01-480p-av01.0.12M.10_PQ.m4s"
#EXTINF:6.715042,
f-0000.m4s
#EXTINF:6.715042,
f-0001.m4s
#EXTINF:6.715042,
f-0002.m4s
#EXTINF:6.715042,
f-0003.m4s
#EXTINF:6.715042,
f-0004.m4s
#EXTINF:6.715042,
f-0005.m4s
#EXTINF:6.715042,
f-0006.m4s
#EXTINF:6.715042,
f-0007.m4s
#EXTINF:6.715042,
f-0008.m4s
#EXTINF:6.715042,
f-0009.m4s
#EXTINF:6.715042,
f-0010.m4s
#EXTINF:6.715042,
f-0011.m4s
#EXTINF:6.715042,
f-0012.m4s
#EXTINF:6.715042,
f-0013.m4s
#EXTINF:6.715042,
f-0014.m4s
#EXTINF:6.715042,
f-0015.m4s
#EXTINF:0.583917,
f-0016.m4s
#EXT-X-ENDLIST

    As you can see, the segments are not 4 seconds long as defined in my command; they are 6.715042 seconds long. Why is that?
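
    A hedged note on the likely mechanism: the HLS muxer can only cut a segment at a keyframe, so seg_duration/hls_time is rounded up to the next keyframe the encoder actually produced. If libsvtav1 does not honour -force_key_frames, the encoder falls back to its own keyframe interval, which would explain the constant 6.715042-second segments. A possible workaround, assuming the wrapper passes these options through, is to pin the GOP length explicitly, for example (96 frames is an assumption, roughly 24 fps for 4-second segments):

    -g 96 -svtav1-params "keyint=96:tile-columns=0:tile-rows=0:..."  (rest of the params unchanged)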

    Thanks in advance