
Media (1)

Keyword: - Tags -/école

Other articles (104)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Submitting bugs and patches

    10 April 2011

    Unfortunately, no piece of software is ever perfect...
    If you think you have found a bug, report it in our ticket system, taking care to pass along the relevant information: the browser type and exact version with which you encountered the anomaly; as precise an explanation as possible of the problem; if possible, the steps to reproduce it; a link to the site / page in question;
    If you think you have fixed the bug yourself (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (12174)

  • How to sum audio from two streams in ffmpeg

    9 February 2021, by user3188445

    I have a video file with two audio streams, representing two people talking at different times. The two people never talk at the same time, so there is no danger of clipping by summing the audio. I would like to sum the audio into one stream without reducing the volume. The ffmpeg amix filter has an option that would seem to do what I want, but the option does not seem to work. Here are two minimal non-working examples (the audio tracks are [0:2] and [0:3]):

    ffmpeg -i input.mkv -map 0:0 -c:v copy \
       -filter_complex '[0:2][0:3]amix' \
       output.m4v

    ffmpeg -i input.mkv -map 0:0 -c:v copy \
       -filter_complex '[0:2][0:3]amix=sum=sum' \
       output.m4v

    The first example diminishes the audio volume. The second example is a syntax error. I tried other variants like amix=sum and amix=sum=1, but despite the documentation I don't think the sum option exists any more. ffmpeg -h filter=amix does not mention the sum option (ffmpeg version n4.3.1).

    My questions:

    1. Can I sum two audio tracks with ffmpeg without losing resolution? (I'd rather not cut the volume in half and scale it back up, but if there's no other way I guess I'd accept an answer that sacrifices a bit.)

    2. Is there an easy way to adjust the relative delay of one of the tracks by a few milliseconds?
    
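    One route that might address both questions (a sketch, not verified here): ffmpeg builds newer than the n4.3.1 mentioned above (4.4 and later) give amix a normalize option, and normalize=0 disables the automatic input scaling so the samples are simply summed; adelay can shift one of the tracks before the mix. The 12 ms below is an arbitrary example value, and the stream labels are the ones from the question.

    ffmpeg -i input.mkv -map 0:0 -c:v copy \
       -filter_complex '[0:3]adelay=delays=12:all=1[d];[0:2][d]amix=inputs=2:normalize=0' \
       output.m4v

    On builds without normalize, a two-input amix halves each track, so appending volume=2 after it restores the original level; since amix mixes in floating point, that detour should cost essentially nothing in precision, even if a plain sum feels cleaner.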


  • Trying to capture display output for real-time analysis with OpenCV; I need help with interfacing with the OS for input

    26 July 2024, by mirari

    I want to apply operations from the OpenCV computer vision library, in real time, to video captured from my computer display.
The idea in this particular case is to detect interesting features during gameplay in a popular game and provide the user with an enhanced experience; but I could think of several other scenarios where one would want to have live access to this data as well.
At any rate, for the development phase it might be acceptable to use canned video, but for the final application performance and responsiveness are obviously critical.

    I am trying to do this on Ubuntu 10.10 as of now, and would prefer to use a UNIX-like system, but any options are of interest.
My C skills are very limited, so whenever talking to OpenCV through Python is possible, I try to use that instead.
Please note that I am trying to capture NOT from a camera device, but from a live stream of display output; and I'm at a loss as to how to take the input. As far as I can tell, CaptureFromCAM works only for camera devices, and it seems to me that the requirement for real-time performance in the end result makes storing to a file and reading it back through CaptureFromFile a bad option.

    The most promising route I have found so far seems to be using ffmpeg with the x11grab option to capture from an X11 display;
(e.g. the command
ffmpeg -f x11grab -sameq -r 25 -s wxga -i :0.0 out.mpg
captures 1366x768 of display 0 to 'out.mpg').
I imagine it should be possible to treat the output stream from ffmpeg as a file to be read by OpenCV (presumably by using the CaptureFromFile function), maybe by using pipes; but this is all at a much higher level than I have ever dealt with before and I could really use some directions.
Do you think this approach is feasible? And more importantly, can you think of a better one? How would you do it?

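    In case it helps make the piping idea concrete, here is a sketch of the capture side (the resolution, frame rate and pixel format are assumptions, and my_opencv_reader is just a placeholder for whatever program consumes the frames): ffmpeg can write raw bgr24 frames to standard output, and the consumer then reads fixed-size chunks of width x height x 3 bytes per frame from the pipe. Note that -sameq has been removed from current ffmpeg, so it is left out.

    ffmpeg -f x11grab -r 25 -s 1366x768 -i :0.0 \
       -f rawvideo -pix_fmt bgr24 - | ./my_opencv_reader

    On the Python side, reading 1366*768*3 bytes at a time from the subprocess's stdout and reshaping the buffer into a NumPy array yields frames that OpenCV functions can work on directly, with no intermediate file.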

  • Emphasis level map with ffmpeg using NVENC

    2 November 2020, by ofer rubinstein

    I am trying to apply different compression levels to different regions of the video.
ffmpeg has something called "addroi", but it's just a recommendation and doesn't guarantee an absolute level of compression.
Nvidia NVENC has something called an Emphasis Level Map; I am hoping this will make it possible to achieve an accurate level of compression per region.

    I am unable to find how to do this via the ffmpeg executable, so I am starting to try to do it using the ffmpeg SDK.
Do you have any idea how to do this via ffmpeg.exe instead of using the SDK?

    Here is what I am talking about in the NVIDIA documentation:

    https://docs.nvidia.com/video-technologies/video-codec-sdk/nvenc-video-encoder-api-prog-guide/index.html#emphasis-map
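    For reference, the addroi route mentioned above looks roughly like this (a sketch with an arbitrary region; shown here with libx264, which honours the ROI side data addroi attaches, since whether h264_nvenc turns that side data into an emphasis map is exactly what remains unclear):

    ffmpeg -i input.mp4 \
       -vf 'addroi=x=0:y=0:w=iw/2:h=ih/2:qoffset=-1/2' \
       -c:v libx264 output.mp4

    A negative qoffset asks the encoder to spend more bits on that region, but as noted above it is only a hint to the rate control, not a guaranteed compression level.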