Advanced search

Media (0)

Keyword: - Tags - / inscription3

No media matching your criteria is available on the site.

Other articles (42)

  • Support for all media types

    10 April 2011

    Unlike many modern programs and other document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Accepted formats

    28 January 2010, by

    The following commands provide information about the formats and codecs handled by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats used: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)
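
    As a quick illustration of the two commands mentioned in the last item above (a hedged example; the grep patterns are only illustrative), checking whether the local ffmpeg build handles a given codec or container might look like:

    ffmpeg -codecs | grep -i h264
    ffmpeg -formats | grep -i mp4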

On other sites (7212)

  • FFMPEG command line not using GPU when compressing MP4 file

    13 April 2022, by StealthRT

    Hey all, I have been working on a good command-line string to use for my movies, which I would like to trim down to at least half their current size.

    My HandBrake information regarding my GPU and computer system is this:

    HandBrake 1.5.1 (2022011000)
OS: Microsoft Windows NT 10.0.19043.0
CPU: Intel(R) Xeon(R) CPU           X5650  @ 2.67GHz (12 Cores, 24 Threads)
Ram: 40940 MB, 
GPU Information:
  Microsoft Remote Display Adapter - 10.0.19041.662
  NVIDIA Tesla K10 - 30.0.14.7141
  NVIDIA Tesla K10 - 30.0.14.7141
  Microsoft Basic Display Adapter - 10.0.19041.868

    When I originally made a command line, I was just using it to copy the file over to where it needed to go with the following:

    ffmpeg -y -hide_banner -threads 8 -hwaccel cuda -hwaccel_device 1
-hwaccel_output_format cuda -v verbose -i "c:\testingvids\AEON FLUX 2005.mp4" -c:v h264_cuvid -gpu:v 1 -preset slow -c copy "c:\testingvids\AEON FLUX 2005 nvidia.mp4"

    This produced an 828x processing speed.

    But when taking that same file and compressing it, I seem to get only an 8x speed?

    So that is quite a difference. Am I using the correct syntax to make it use only my GPU to convert/compress the mp4 with the h264 NVENC encoder?
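
    Worth noting for context: the command above ends with -c copy, which remuxes the streams without re-encoding, so the 828x figure reflects a plain stream copy rather than GPU compression. A hedged sketch of an actual NVENC re-encode (the bitrate, preset and file names below are illustrative, and h264_nvenc must be present in the local ffmpeg build) would look something like:

    ffmpeg -y -hide_banner -hwaccel cuda -hwaccel_device 1 -hwaccel_output_format cuda -i "c:\testingvids\input.mp4" -c:v h264_nvenc -gpu 1 -preset slow -b:v 4M -c:a copy "c:\testingvids\output.mp4"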

  • FFMPEG mosaic/side-by-side-compositing from simultaneous DirectShow input devices

    9 June 2013, by timlukins

    This is what I'm trying to do:

    ffmpeg.exe -y \
    -f dshow -i video="Microsoft LifeCam Cinema" \
    -f dshow -i video="Microsoft LifeCam VX-2000" \
    -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" \
    -map "[fileout]" -vcodec libx264 -f flv out.flv

    Basically, I have 2 webcams and I would like to combine them into a single video file in which the frames are 2x1 in size, with the frame from one camera on the left and the other on the right.

    In other words, what might be termed "mosaic-ing" or "side-by-side compositing". This is not concatenation - i.e. one file after the other (so not using the concat filter).

    I've gleaned that this use of -filter_complex to pad and then position the frames appears to be the prescribed way. Indeed, when I test this with files like so:

    ffmpeg.exe -y -i test1.flv -i test2.flv -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" -map "[fileout]" -vcodec libx264 -f flv testout.flv

    It works fine!

    With the "live" version, however, both cameras seem to start (their lights come on) but the capture stalls.

    (Suspiciously like there is some DirectShow deadlock on the separate input device threads...)

    And so, I wonder: is there some way to overcome this and force the two input streams' data to merge?

    I have also tried the extended format of the dshow filter option, like so:

    -f dshow -i video="Microsoft LifeCam Cinema":video="Microsoft LifeCam VX-2000"

    But only one camera is then selected (I suspect this option is really only to enable separate video and audio streams to be combined).

    I've also tried explicitly setting each input device to have the exact same frame size and rate with -f dshow -video_size 640x480 -framerate 30. No joy though. It still stalls once the camera is listed.

    Here is the tail end of the output (with -v debug on):

    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option y (overwrite output files) with argument 1.
    Applying option v (set libav* logging level) with argument debug.
    Applying option filter_complex (create a complex filtergraph) with argument [0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout].
    Successfully parsed a group of options.
    Parsing a group of options: input file video=Microsoft LifeCam Cinema.
    Applying option f (force format) with argument dshow.
    Successfully parsed a group of options.
    Opening an input file: video=Microsoft LifeCam Cinema.
    [dshow @ 00000000016e79a0] All info found
    [dshow @ 00000000016e79a0] Estimating duration from bitrate, this may be inaccurate
    Input #0, dshow, from 'video=Microsoft LifeCam Cinema':
     Duration: N/A, start: 1130406.072000, bitrate: N/A
       Stream #0:0, 1, 1/10000000: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 333333/10000000, 30 tbr, 10000k tbn, 30 tbc
    Successfully opened the file.
    Parsing a group of options: input file video=Microsoft LifeCam VX-2000.
    Applying option f (force format) with argument dshow.
    Successfully parsed a group of options.
    Opening an input file: video=Microsoft LifeCam VX-2000.
    [dshow @ 00000000016e79a0] real-time buffer 101% full! frame dropped!

    EDIT: Further details from trying to fix this within the code...

    I've always understood from past Windows DirectShow work that multiple calls to CoInitialize() on the same thread are bad. See here. Perhaps I've misunderstood how FFMPEG is multi-threaded (i.e. whether each input device is on its own thread), but I thought to just try regulating the call with a guard variable (a static int com_init = 0; - this should probably be mutex-ed...).

    e.g. in libavdevice/dshow.c method dshow_read_header

    889    if (com_init == 0)
    890        CoInitialize(0);
    891    com_init++;

    And similar for dshow_read_close

    170    com_init--;
    171    if (com_init == 0)
    172        CoUninitialize();

    Sadly, this doesn't work. The first camera starts but the second doesn't, and the error is:

    [dshow @ 0000000000301760] Could not set video options
    video=Microsoft LifeCam VX-2000: Input/output error

    (Worth a shot. Looks like each input device is indeed on the same thread...)
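
    Since the debug log above ends with a "real-time buffer 101% full! frame dropped!" warning, one avenue worth ruling out (a hedged suggestion; the buffer sizes here are purely illustrative) is the dshow per-device real-time buffer, which can be raised with -rtbufsize on each input:

    ffmpeg.exe -y \
    -f dshow -rtbufsize 512M -video_size 640x480 -framerate 30 -i video="Microsoft LifeCam Cinema" \
    -f dshow -rtbufsize 512M -video_size 640x480 -framerate 30 -i video="Microsoft LifeCam VX-2000" \
    -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" \
    -map "[fileout]" -vcodec libx264 -f flv out.flv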

  • How to stream frames from OpenCV C++ code to Video4Linux or ffmpeg?

    6 July 2022, by usamazf

    I am experimenting with OpenCV to process frames from a video stream. The goal is to fetch a frame from a stream, process it, and then push the processed frame to a new/fresh stream.

    I have been able to successfully read streams using OpenCV's video capture functionality, but I do not know how I can create an output stream with the processed frames.

    In order to do some basic tests, I created a stream from a local video file using ffmpeg, like so:

    ffmpeg -i sample.mp4 -v 0 -vcodec mpeg4 -f mpegts \
        "udp://@127.0.0.1:23000?overrun_nonfatal=1&fifo_size=50000000"

    And in my C++ code, using the VideoCapture functionality of the OpenCV library, I am able to capture the stream created above. A basic layout of what I am trying to achieve is attached below:

    cv::VideoCapture capture("udp://@127.0.0.1:23000?overrun_nonfatal=1&fifo_size=50000000", cv::CAP_FFMPEG);

cv::Mat frame;

while (true) 
{
    // use the above stream to capture a frame
    capture >> frame;
    
    // process the frame (not relevant here)
    ...

    // finally, after all the processing is done I 
    // want to put this frame on a new stream say at
    // udp://@127.0.0.1:25000, I don't know how to do
    // this, ideally would like to use Video4Linux but
    // solutions with ffmpeg are appreciated as well
}

    As you can see from the comment in the above code, I don't have any idea how I should even begin handling this. I tried searching for similar questions, but all I could find was how to do VideoCapture using streams, nothing related to outputting to a stream.

    I am relatively new to this, and this might seem like a very basic question to many of you, so please excuse me.
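
    One common pattern that fits the layout above (a hedged sketch, not from the original question: the frame geometry, frame rate and output URL are assumptions) is to spawn a second ffmpeg process and write each processed frame, as raw BGR bytes, to its stdin; ffmpeg then encodes and streams them:

    #include <cstdio>
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture capture("udp://@127.0.0.1:23000?overrun_nonfatal=1&fifo_size=50000000",
                                 cv::CAP_FFMPEG);

        // Assumed geometry: 640x480 BGR frames at 30 fps; adjust to the real stream.
        // ffmpeg reads raw frames from stdin and re-encodes them onto a new UDP stream.
        FILE* ff = popen(
            "ffmpeg -f rawvideo -pix_fmt bgr24 -video_size 640x480 -framerate 30 -i - "
            "-vcodec mpeg4 -f mpegts udp://127.0.0.1:25000", "w");
        if (!ff)
            return 1; // could not start ffmpeg

        cv::Mat frame;
        while (capture.read(frame))
        {
            // ... process the frame here ...

            // push the processed frame, raw, into ffmpeg's stdin
            fwrite(frame.data, 1, frame.total() * frame.elemSize(), ff);
        }

        pclose(ff);
        return 0;
    }

    For Video4Linux output specifically, the same raw frames could instead be written to a loopback device (ffmpeg's -f v4l2 output), but that assumes the v4l2loopback kernel module is installed.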