
Media (2)


Other articles (54)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to adjust the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Other interesting software

    13 April 2011, by

    We don't claim to be the only ones doing what we do, and we certainly don't claim to be the best either. What we do, we simply try to do well, and to keep getting better.
    The following list covers software that is more or less similar to MediaSPIP, or whose functionality MediaSPIP more or less tries to match.
    We don't know them and haven't tried them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

  • Possible deployments

    31 January 2010, by

    Two types of deployment are possible, depending on two factors: the installation method chosen (standalone or as a farm); and the expected number of daily encodings and the expected traffic.
    Video encoding is a heavy process that consumes a great deal of system resources (CPU and RAM), and all of this must be taken into account. This setup is therefore only possible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

On other sites (9885)

  • Android Video color effect issue using FFMPEG [on hold]

    8 June 2016, by umesh mishra

    I'm facing a problem: when we use the effects library mentioned below, the output video is only about 1/3 of the original video's size. Please tell me what the issue is. It also doesn't work on Marshmallow.
    https://github.com/krazykira/VidEffects/wiki/Permanent-video-effects

  • FFMPEG mosaic/side-by-side-compositing from simultaneous DirectShow input devices

    9 June 2013, by timlukins

    This is what I'm trying to do:

    ffmpeg.exe -y \
    -f dshow -i video="Microsoft LifeCam Cinema" \
    -f dshow -i video="Microsoft LifeCam VX-2000" \
    -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" \
    -map "[fileout]" -vcodec libx264 -f flv out.flv

    Basically, I have 2 webcams and I would like to combine them into a single video file in which the frames are 2x1 in size, with the frame from one camera on the left and the other on the right.

    In other words, what might be termed "mosaic-ing" or "side-by-side compositing". This is not concatenation - i.e. one file after the other (so not using the concat filter).

    I've gleaned that this use of -filter_complex to pad and then position the frames appears to be the prescribed way. Indeed, when I test this with files like so:

    ffmpeg.exe -y -i test1.flv -i test2.flv -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" -map "[fileout]" -vcodec libx264 -f flv testout.flv

    It works fine!
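
    (As an aside, more recent ffmpeg builds can express the same side-by-side layout with the hstack filter, assuming both inputs share the same height; a minimal sketch of the equivalent file-based test, not a change to the pad/overlay approach used in the rest of this question:

    ffmpeg.exe -y -i test1.flv -i test2.flv -filter_complex "[0:v][1:v]hstack=inputs=2[fileout]" -map "[fileout]" -vcodec libx264 -f flv testout.flv
    )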

    With the "live" version however, both cameras seem to start (their lights come on) but the capture stalls.

    (Suspiciously like there is some DirectShow deadlock on the separate input device threads...)

    And so, I wonder: is there some way to overcome this and force the two input streams' data to merge?

    I have also tried the extended form of the dshow input option, like so:

    -f dshow -i video="Microsoft LifeCam Cinema":video="Microsoft LifeCam VX-2000"

    But only one camera is then selected (I suspect this option is really only to enable separate video and audio streams to be combined).

    I've also tried explicitly setting each input device to have the exact same frame size and rate with -f dshow -video_size 640x480 -framerate 30. No joy though. It still stalls once the camera is listed.
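
    For reference, those per-input options have to appear before the -i they apply to; a sketch of how that looks with the same devices and filtergraph (this only illustrates option placement, it is not a guaranteed fix for the stall):

    ffmpeg.exe -y \
    -f dshow -video_size 640x480 -framerate 30 -i video="Microsoft LifeCam Cinema" \
    -f dshow -video_size 640x480 -framerate 30 -i video="Microsoft LifeCam VX-2000" \
    -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" \
    -map "[fileout]" -vcodec libx264 -f flv out.flv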

    Here is the tail end of the output (with -v debug on):

    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option y (overwrite output files) with argument 1.
    Applying option v (set libav* logging level) with argument debug.
    Applying option filter_complex (create a complex filtergraph) with argument [0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout].
    Successfully parsed a group of options.
    Parsing a group of options: input file video=Microsoft LifeCam Cinema.
    Applying option f (force format) with argument dshow.
    Successfully parsed a group of options.
    Opening an input file: video=Microsoft LifeCam Cinema.
    [dshow @ 00000000016e79a0] All info found
    [dshow @ 00000000016e79a0] Estimating duration from bitrate, this may be inaccurate
    Input #0, dshow, from 'video=Microsoft LifeCam Cinema':
     Duration: N/A, start: 1130406.072000, bitrate: N/A
       Stream #0:0, 1, 1/10000000: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 333333/10000000, 30 tbr, 10000k tbn, 30 tbc
    Successfully opened the file.
    Parsing a group of options: input file video=Microsoft LifeCam VX-2000.
    Applying option f (force format) with argument dshow.
    Successfully parsed a group of options.
    Opening an input file: video=Microsoft LifeCam VX-2000.
    [dshow @ 00000000016e79a0] real-time buffer 101% full! frame dropped!

    EDIT: Further details from trying to fix this within the code...

    I've always understood from past Windows DirectShow work that multiple calls to CoInitialize() on the same thread are bad. See here. Perhaps I've misunderstood how FFMPEG is multi-threaded (i.e. whether each input device is on its own thread), but I thought I'd try regulating the call with a guard variable (a static int com_init = 0; this should probably be mutexed...).

    e.g. in libavdevice/dshow.c, in the dshow_read_header method:

    /* only initialise COM the first time a dshow device is opened */
    if (com_init == 0)
        CoInitialize(0);
    com_init++;

    And similarly for dshow_read_close:

    com_init--;
    /* only tear COM down when the last device has been closed */
    if (com_init == 0)
        CoUninitialize();

    Sadly, this doesn't work. The first camera starts but the second doesn't, and the error is:

    [dshow @ 0000000000301760] Could not set video options
    video=Microsoft LifeCam VX-2000: Input/output error

    (Worth a shot. Looks like each input device is indeed on the same thread...)

  • FFMPEG command line not using GPU when compressing MP4 file

    13 April 2022, by StealthRT

    Hey all, I have been working on a good command-line string to use for my movies, which I would like to shrink to at least half their current size.

    My HandBrake information regarding my GPU and computer system is this:
    HandBrake 1.5.1 (2022011000)
OS: Microsoft Windows NT 10.0.19043.0
CPU: Intel(R) Xeon(R) CPU           X5650  @ 2.67GHz (12 Cores, 24 Threads)
Ram: 40940 MB, 
GPU Information:
  Microsoft Remote Display Adapter - 10.0.19041.662
  NVIDIA Tesla K10 - 30.0.14.7141
  NVIDIA Tesla K10 - 30.0.14.7141
  Microsoft Basic Display Adapter - 10.0.19041.868

    When I originally made a command line, I was just using it to copy the file over to where it needed to go with the following:

    ffmpeg -y -hide_banner -threads 8 -hwaccel cuda -hwaccel_device 1
-hwaccel_output_format cuda -v verbose -i "c:\testingvids\AEON FLUX 2005.mp4" -c:v h264_cuvid -gpu:v 1 -preset slow -c copy "c:\testingvids\AEON FLUX 2005 nvidia.mp4"

    This produced an 828x processing speed:

    (screenshot of the console output)

    But when taking that same file and compressing it, I seem to only get an 8x speed?

    (screenshot of the console output)

    So that is quite a difference. Am I using the correct syntax for it to use only my GPU to convert/compress the MP4 with h264 NVENC?
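
    (For context: in the command shown above, -c copy stream-copies the audio and video without re-encoding, and h264_cuvid is a CUDA decoder rather than an encoder, which is why that run was so fast. A minimal sketch of what a GPU re-encode would more typically look like, assuming an ffmpeg build with NVENC support, reusing the paths from the question, and with an arbitrary placeholder bitrate:

    ffmpeg -y -hide_banner -hwaccel cuda -hwaccel_device 1 -hwaccel_output_format cuda -i "c:\testingvids\AEON FLUX 2005.mp4" -c:v h264_nvenc -gpu 1 -preset slow -b:v 2M -c:a copy "c:\testingvids\AEON FLUX 2005 nvidia.mp4"

    Whether the Tesla K10s listed above run NVENC at a useful speed depends on the hardware generation and driver, so treat this purely as an illustration of the encoder syntax.)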