
Media (91)

Other articles (45)

  • MediaSPIP: Changing the rights for object creation and final publication

    11 November 2010, by

    By default, MediaSPIP allows you to create 5 types of objects.
    Also by default, the rights to create these objects and to publish them definitively are reserved for administrators, but they can of course be configured by the webmasters.
    These rights are locked for several reasons: because allowing publication should be the webmaster's decision, not the whole platform's, and therefore should not be a default choice; because having an account can also serve other purposes, (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first MediaSPIP stable release.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFMpeg: the main encoder; it can transcode almost all types of video and audio files into formats readable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Complementary, optional binaries: flvtool2: (...)

On other sites (7140)

  • How to synchronize audio and video using FFmpeg from 2 different inputs and stream them over the network via RTP, in C++?

    5 November 2018, by ElPablo

    I am currently trying to develop an app in C++ that performs all of the following:

    • Capture Video of the desktop
    • Capture Audio of the desktop
    • Video & Audio processing
    • Stream Audio & Video to another computer

    For this I am using OpenCV and FFmpeg libraries.

    I succeeded in capturing the video with OpenCV, converting it into an AVFrame, encoding the frame, and sending it over the network with FFmpeg.

    For the audio, I also succeeded (with the help of the FFmpeg documentation, transcode_aac.c) in capturing the audio from my sound card, then decoding, converting, encoding, and sending it over the network.

    Then I go to my other computer and read the 2 streams with FFplay:

    ffplay -loglevel repeat+level+verbose -probesize 32 -sync ext -i config.sdp -protocol_whitelist file,udp,rtp

    It works, I get the video and the audio... but the sound is not at all synchronized with the video; it arrives about 3 seconds late.
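    As an aside, the config.sdp that ffplay reads describes both RTP sessions together; a rough sketch of such a file (the address, ports, payload types, and codecs here are illustrative assumptions, not taken from the question):

    ```
    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=Desktop capture
    c=IN IP4 192.168.1.10
    t=0 0
    m=video 5004 RTP/AVP 96
    a=rtpmap:96 H264/90000
    m=audio 5006 RTP/AVP 97
    a=rtpmap:97 MPEG4-GENERIC/48000/2
    ```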

    My code is structured like this:

    I am using 3 AVFormatContext instances:

    • audio input
    • video output
    • audio output

    I did that because RTP can only carry one stream, so I had to separate audio and video.

    So basically, I have 2 inputs and I need 2 outputs.

    I know how to do that on the command line with FFmpeg (and there it works, the streams are synchronized), but I have no idea how to do it, and keep the streams synchronized, in C++.

    My guesses are :

    • I have to play with the time_base attribute of the packets during
      encoding => but how can I synchronize packets from two different
      AVStream and AVFormatContext instances?
    • Do I have to set the time_base attribute of the audio output from the
      audio input, or from the 30 FPS that I want? Same question for the
      video output.
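    One common approach (a sketch, not the asker's code): stamp both streams against a single shared wall-clock start, then rescale the elapsed time into each stream's own time_base when setting pts. In libavutil, av_gettime_relative() can supply the clock and av_rescale_q() performs the rescale; the arithmetic itself, self-contained:

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Minimal sketch of the sync idea (hypothetical helper, not FFmpeg API):
    // both streams derive their pts from the SAME capture start instant,
    // each expressed in its own time_base, so the player can realign them.

    struct TimeBase { int64_t num, den; };  // seconds per tick = num/den

    // Rescale elapsed wall-clock microseconds into `tb` ticks
    // (the same conversion av_rescale_q performs in libavutil).
    int64_t pts_from_elapsed_us(int64_t elapsed_us, TimeBase tb) {
        return elapsed_us * tb.den / (tb.num * 1000000);
    }

    int main() {
        TimeBase audio_tb{1, 48000};  // e.g. 48 kHz audio
        TimeBase video_tb{1, 90000};  // e.g. the 90 kHz RTP video clock

        // A packet captured 2 seconds after the shared start instant:
        int64_t elapsed_us = 2000000;

        assert(pts_from_elapsed_us(elapsed_us, audio_tb) == 96000);
        assert(pts_from_elapsed_us(elapsed_us, video_tb) == 180000);
        return 0;
    }
    ```

    The key point is the shared start instant: if audio and video each start their own clock, any fixed offset between them (e.g. encoder startup latency) is baked into the timestamps and the player cannot recover it.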

    Further information :

    • The video is captured using this
      OPENCV Desktop Capture,
      then converted with the function sws_scale() into an AVFrame

    • I am using 4 threads (video capture, video processing, audio decoding,
      audio processing)

    So guys, if you have any ideas on how to synchronize the audio and video, or any other tips that could help me, I would be glad to hear them.

    Thx

  • "Automatic" switchable graphics on desktop, is there a way to disable them?

    14 August 2021, by Hab-Land0

    Recently, I updated my graphics drivers for a new system I built, a mix between an AMD APU and an Nvidia Quadro. But I stumbled upon a rare problem: every time I try to use OpenCL acceleration in ffmpeg for libx264 encoding, ffmpeg notifies me with the following line:

    [libx264 @ 0000028149222780] OpenCL acceleration disabled, switchable graphics detected

    Searching for this line in ffmpeg's code, it apparently occurs when the "main OpenCL driver" (if you can call it that) is redirected in such a way that it tries to use both devices (Code).

    My obvious next step was to search for everything I could about this "switchable graphics", but almost all the tutorials on websites told me to look in the driver settings, and literally neither Radeon Software nor Nvidia's control panel displays any option about it (it is worth saying that almost all of the tutorials refer to laptops with dedicated graphics and were very outdated).

    Another way I use OpenCL is for VapourSynth filters, such as KNLMeansCL. And when I use this filter, Task Manager shows that both the AMD APU and the Nvidia GPU are being used simultaneously (I guess that's how the switchable graphics actually work, which partially explains why x264's OpenCL doesn't).

    My main complaint with this is that I want to use the AMD as the display driver and let the Nvidia do the hard work, and I was actually able to do that before updating my drivers. Talking about the "updates" more in depth, I updated Nvidia's from "462.59" to "471.11" and, unfortunately, I can't remember which versions my AMD drivers were.

    Edit: the only way I can make full use of Nvidia's card is by using it as my main display, but that apparently also disables the AMD iGPU; I am not sure if it can even be used for small tasks (like those previously mentioned).

  • Is Replacing Dynamic Resolution with scale_amf in an FFmpeg Command a Good Direction?

    21 November 2024, by fred

    I'm working on a Lua script for MPV that processes 360-degree videos using FFmpeg's v360 filter. The original command dynamically calculates the output resolution based on a res variable, like this :

    mp.command_native_async({
        "no-osd", "vf", "add",
        string.format(
            "@vrrev:v360=%s:%s:reset_rot=1:in_stereo=%s:out_stereo=2d:id_fov=%s:d_fov=%.3f:yaw=%.3f:pitch=%s:roll=%.3f:w=%s*192.0:h=%.3f*108.0:h_flip=%s:interp=%s",
            in_flip, inputProjection, outputProjection, in_stereo, idfov, dfov, yaw, pitch, roll, res, res, h_flip, scaling
        )
    }, updateComplete)

    Change Proposal:

    I am considering replacing the dynamic width and height calculations with a scale_amf filter to handle scaling more efficiently and leverage GPU acceleration. The updated command would look like this :

    mp.command_native_async({
        "no-osd", "vf", "add",
        string.format(
            "@vrrev:v360=%s:%s:reset_rot=1:in_stereo=%s:out_stereo=2d:id_fov=%s:d_fov=%.3f:yaw=%.3f:pitch=%s:roll=%.3f,%sscale_amf=w=%.0f:h=%.0f",
            inputProjection, outputProjection, in_stereo, idfov, dfov, yaw, pitch, roll, in_flip, res * 192.0, res * 108.0
        )
    }, updateComplete)

    Hardware Specifications:

    I am using an AMD Ryzen 5 5600G, no discrete graphics card, 16GB RAM, Windows 10.

    Questions:

    Is using scale_amf for scaling a good direction in terms of performance and efficiency?
    Are there any potential drawbacks to this approach that I should be aware of?
    How does using scale_amf compare to the original dynamic resolution method in terms of output quality and processing speed?
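    For reference, with illustrative values substituted (the projections, FOV values, and res are made-up placeholders, and in_flip is assumed empty), the proposed format string would expand to a filtergraph along these lines:

    ```
    @vrrev:v360=hequirect:flat:reset_rot=1:in_stereo=sbs:out_stereo=2d:id_fov=180:d_fov=100.000:yaw=0.000:pitch=0:roll=0.000,scale_amf=w=1920:h=1080
    ```

    One caveat worth checking: scale_amf requires an FFmpeg/MPV build with AMF support and a working AMD driver; on builds without it, a plain scale filter is the software fallback.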

    Any insights or experiences with this change would be greatly appreciated!