Other articles (62)

  • Encoding and conversion into formats readable on the Internet

    10 April 2011

    MediaSPIP transforms and re-encodes uploaded documents in order to make them readable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player needed by older browsers.
    Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)
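
    MediaSPIP's actual encoding profiles are not shown on this page; as a rough sketch only, the kind of ffmpeg commands such a pipeline runs looks like the following (input.mov and input.wav are placeholder file names, and the codec choices are assumptions, not MediaSPIP's real configuration):

     # Hypothetical sketch of the HTML5 re-encoding described above.
     # Video: one output per HTML5-friendly container/codec combination.
     ffmpeg -i input.mov -c:v libx264 -c:a aac -movflags +faststart video.mp4
     ffmpeg -i input.mov -c:v libtheora -c:a libvorbis video.ogv
     ffmpeg -i input.mov -c:v libvpx -c:a libvorbis video.webm

     # Audio: an MP3 and an Ogg Vorbis version of the same source.
     ffmpeg -i input.wav -c:a libmp3lame audio.mp3
     ffmpeg -i input.wav -c:a libvorbis audio.ogg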

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • ANNEX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, beyond those used by the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (9621)

  • Linux: Create a file for writing with controlled flushing to disk in large chunks [closed]

    12 August 2023, by Pete

    On Linux I have a process (ffmpeg) that writes very slowly (sometimes even slower than 1 kB/s) to disk. Ffmpeg can buffer this into 256 kB chunks that get written infrequently, but ffmpeg hangs occasionally, and if I try to detect these hangs by checking that the file is being updated, I need to wait a long time between updates, up to 10 or 15 minutes; otherwise I can sometimes mistakenly kill the ffmpeg process when it appears to have stopped writing but is in fact still filling its internal buffer.

    There seems to be no way to detect this short of using strace (none that I can find, anyway). So I am wondering about turning off buffering in ffmpeg and having it write unbuffered to disk.

    This will result in the disk constantly making tiny writes, wasting power (and probably, if I use an SSD, messing with wear levelling too).

    So I would like to make ffmpeg write to a 'virtual file' (in memory, either kernel memory or a process) whose flushing characteristics I can specify. The idea would be to specify, say, a flush every 2 minutes; then I can keep an eye on the file size and make sure it's still being written.

    I don't think I've missed any other ways to do this job: even if I could watch the socket stream coming into ffmpeg, the process itself could still stop writing and lose data. Doing the buffering outside of ffmpeg seems like the best way.

    Is there a built-in way to do this in Linux, or does it mean a custom process? I guess I know how to do this with a small C program and pipe the data in, but I wonder if there's a neater way.
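
    One low-tech sketch (my own suggestion, not from the post, and it assumes the output format can be streamed over a pipe, e.g. MPEG-TS) is to have ffmpeg write to stdout and put dd between it and the disk, so the file only grows in large blocks; "$input" and out.ts are placeholders:

     # Hypothetical sketch: ffmpeg streams to a pipe, dd gathers the data and
     # only issues 4 MiB writes to the file (obs = output block size).
     ffmpeg -i "$input" -c copy -f mpegts - | dd obs=4M of=out.ts

    This gives size-based rather than time-based flushing, though; for a "flush every 2 minutes" policy, the small custom program the poster mentions, buffering stdin and writing out on a timer, is probably still the cleanest route.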

  • ffmpeg: nvidia gpu performance sub-optimal

    3 August 2021, by david furst

    The problem seems fairly basic: I'd like to create thumbnails from incoming video in the shortest time possible, and I'm trying to do this by offloading processing to an NVIDIA GPU.

    While I run ffmpeg, I'm monitoring the GPU usage with the nvidia-smi utility. GPU usage never goes above 15%, and the time needed to encode the thumbnails with the GPU is only 10% less than the time required without it. These performance levels are very disappointing.

    My question: am I going about this the wrong way (and if so, how should I go about it), or is this GPU performance 'normal'/'reasonable'?

    SYSTEM INFORMATION

    The machine is a desktop PC running Windows 10, with 8 GB RAM and an Intel i7-7700. The GPU is an NVIDIA Quadro Pro 4000 with CUDA 11.4 installed. ffmpeg is version N-101372-gb5cb8c8767-g2fc309e699+4 (2021) running under MinGW, built with --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-libnpp --enable-nvdec and --enable-nvenc.

    A typical ffmpeg command line I've used is:

     1 ffmpeg -hide_banner \
 2     -init_hw_device cuda=cuda:0 -filter_hw_device cuda \
 3     -hwaccel_output_format cuda \
 4     -i "$infile" \
 5     -vf "hwupload_cuda,scale_npp=w=200:h=150:format=yuv420p:interp_algo=lanczos,fps=1/1,hwdownload,format=yuv420p" \
 6     -y "$outdir/%08d.png"

    I've varied the above by supplementing some CUDA-related parameters according to posts I've read here on Stack Overflow and in the NVIDIA transcoding guide, but haven't been able to improve performance. Adding any of -hwaccel cuda, -hwaccel cuvid, -hwaccel nvenc at the beginning of line 3 results in the error:

    Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scaler_0'

    Any pointers appreciated.
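
    A commonly suggested variant (a sketch using the same placeholders, not necessarily the right fix for this machine) is to let the decoder itself run on the GPU and keep its output in CUDA memory, so the explicit hwupload_cuda is no longer needed and only the finished 200x150 thumbnails are downloaded to system memory:

     # Hypothetical sketch: decode with -hwaccel cuda, scale on the GPU with
     # scale_npp, then hwdownload only the thumbnails before PNG encoding.
     ffmpeg -hide_banner \
         -hwaccel cuda -hwaccel_output_format cuda \
         -i "$infile" \
         -vf "scale_npp=w=200:h=150:interp_algo=lanczos,hwdownload,format=nv12,fps=1" \
         -y "$outdir/%08d.png"

    Note that the PNG encoding at the end still runs on the CPU, so nvidia-smi will never show such a job as fully GPU-bound.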

  • movenc: Allow setting start_dts/start_cts before writing actual packets

    3 November 2015, by Martin Storsjö
    movenc: Allow setting start_dts/start_cts before writing actual packets
    

    By writing a zero-sized packet, the caller can communicate the
    start_dts/start_cts for the stream without actually writing
    the first packet.

    This allows doing random-access writing of fragments when the
    start dts of the stream isn’t zero, so that the edit list in the moov
    is written based on timestamps from the nominal start time signaled
    via the zero-sized packet, while the first proper packet written
    corresponds to a later fragment.

    To avoid potential unexpected behaviour, empty packets only set
    start_dts if the frag_discont flag is set.

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DBH] libavformat/movenc.c