
Media (91)

Other articles (95)

  • MediaSPIP 0.1 Beta version

25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once activated, a preconfiguration is automatically put in place by MediaSPIP init so that the new functionality works out of the box. No configuration step is therefore required.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those used by the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (8954)

  • fate: Add test for vc1test demuxer

    23 October 2018, by Jun Zhao
    fate: Add test for vc1test demuxer
    

    Signed-off-by: Jun Zhao <mypopydev@gmail.com>

    • [DH] tests/fate/microsoft.mak
    • [DH] tests/ref/fate/vc1test_smm0005
    • [DH] tests/ref/fate/vc1test_smm0015
  • ffmpeg DirectShow obvious problem: ffmpeg does not let you reconnect to the camera

    11 April 2021, by josh joyer

    I use ffmpeg with the command:

    ffmpeg -f dshow -i video="@device_pnp_\\?\usb#vid_1908&pid_2311&mi_00#6&353461d3&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global" output.mkv

    It records from the camera fine, but it uses the standard Microsoft Windows DirectShow library, which is faulty. Once connected to the camera, you cannot use it again or even disconnect from it; the computer has to be restarted before you can connect to the camera again. Microsoft advises against using DirectShow (dshow) for cameras and recommends the AForge or VisioForge libraries instead.

    But ffmpeg uses the faulty dshow library. Is there any way to swap DirectShow for some other backend so that one could connect to the camera multiple times?
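
    For context, here is a rough, hypothetical C++ sketch (not from the question) of how a DirectShow video capture device is normally enumerated and then explicitly released; the FriendlyName printed here is the same label ffmpeg's dshow input lists, and releasing every COM object is what frees the camera for the next open. It only illustrates the API in question, it is not a fix for the driver problem described above.

    // Hypothetical, self-contained example (MSVC, Windows SDK): enumerate DirectShow
    // video capture devices and release every COM object explicitly.
    #include <windows.h>
    #include <dshow.h>
    #include <cstdio>
    #pragma comment(lib, "strmiids.lib")
    #pragma comment(lib, "ole32.lib")
    #pragma comment(lib, "oleaut32.lib")

    int main() {
        // Initialize COM for this thread.
        if (FAILED(CoInitializeEx(nullptr, COINIT_MULTITHREADED)))
            return 1;

        ICreateDevEnum* devEnum = nullptr;
        IEnumMoniker* monikers = nullptr;

        // Create the system device enumerator and ask for video input devices.
        HRESULT hr = CoCreateInstance(CLSID_SystemDeviceEnum, nullptr, CLSCTX_INPROC_SERVER,
                                      __uuidof(ICreateDevEnum), reinterpret_cast<void**>(&devEnum));
        if (SUCCEEDED(hr))
            hr = devEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &monikers, 0);

        if (hr == S_OK && monikers) {
            IMoniker* moniker = nullptr;
            while (monikers->Next(1, &moniker, nullptr) == S_OK) {
                IPropertyBag* props = nullptr;
                if (SUCCEEDED(moniker->BindToStorage(nullptr, nullptr, __uuidof(IPropertyBag),
                                                     reinterpret_cast<void**>(&props)))) {
                    VARIANT name;
                    VariantInit(&name);
                    // FriendlyName is the label ffmpeg's dshow input also reports.
                    if (SUCCEEDED(props->Read(L"FriendlyName", &name, nullptr)))
                        wprintf(L"Found capture device: %ls\n", name.bstrVal);
                    VariantClear(&name);
                    props->Release();
                }
                // Release the moniker (and any capture filter bound from it) when done;
                // otherwise the device can stay locked until the process exits.
                moniker->Release();
            }
            monikers->Release();
        }
        if (devEnum) devEnum->Release();
        CoUninitialize();
        return 0;
    }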

  • DXGI Desktop Duplication: encoding frames to send them over the network

    13 November 2016, by prazuber

    I’m trying to write an app that will capture a video stream of the screen and send it to a remote client. I’ve found out that the best way to capture the screen on Windows is to use the DXGI Desktop Duplication API (available since Windows 8). Microsoft provides a neat sample which streams duplicated frames to the screen. Now, I’ve been wondering what the easiest, but still relatively fast, way is to encode those frames and send them over the network.

    The frames come from AcquireNextFrame as a surface that contains the desktop bitmap, together with metadata describing the dirty and move regions that were updated. From here, I have a couple of options:

    1. Extract a bitmap from a DirectX surface and then use an external library like ffmpeg to encode the series of bitmaps to H.264 and send it over RTSP. While straightforward, I fear that this method will be too slow, as it isn’t taking advantage of any native Windows methods. Converting a D3D texture to an ffmpeg-compatible bitmap seems like unnecessary work.
    2. From this answer: convert the D3D texture to an IMFSample and use Media Foundation’s SinkWriter to encode the frame. I found this tutorial on video encoding, but I haven’t yet found a way to immediately get the encoded frame and send it instead of dumping all of them to a video file. (A rough sketch of the texture-to-IMFSample step follows below.)
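
    As an illustration of option 2 only (a hypothetical helper, not from the question): assuming Media Foundation has been started with MFStartup and frameTex is the ID3D11Texture2D obtained from AcquireNextFrame, the texture can be wrapped in an IMFSample via MFCreateDXGISurfaceBuffer without a CPU copy and then handed to an H.264 IMFSinkWriter through WriteSample.

    #include <d3d11.h>
    #include <mfapi.h>
    #include <mfidl.h>
    #pragma comment(lib, "mfplat.lib")
    #pragma comment(lib, "mfuuid.lib")

    // Wrap a GPU desktop frame in an IMFSample (hypothetical helper; the caller
    // releases *outSample after handing it to the sink writer).
    HRESULT WrapTextureAsSample(ID3D11Texture2D* frameTex,
                                LONGLONG timestamp100ns, LONGLONG duration100ns,
                                IMFSample** outSample)
    {
        IMFMediaBuffer* buffer = nullptr;
        IMFSample* sample = nullptr;

        // Media buffer that references the DXGI surface directly, no copy to system memory.
        HRESULT hr = MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), frameTex,
                                               0 /* subresource */, FALSE, &buffer);
        if (SUCCEEDED(hr)) hr = MFCreateSample(&sample);
        if (SUCCEEDED(hr)) hr = sample->AddBuffer(buffer);
        if (SUCCEEDED(hr)) hr = sample->SetSampleTime(timestamp100ns);     // 100-ns units
        if (SUCCEEDED(hr)) hr = sample->SetSampleDuration(duration100ns);

        if (buffer) buffer->Release();
        if (SUCCEEDED(hr))
            *outSample = sample;            // pass to IMFSinkWriter::WriteSample
        else if (sample)
            sample->Release();
        return hr;
    }

    For the SinkWriter to accept GPU samples it also needs a DXGI device manager (MFCreateDXGIDeviceManager plus the MF_SINK_WRITER_D3D_MANAGER attribute when the writer is created); otherwise the frame has to be copied to a CPU-readable staging texture first.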

    Since I haven’t done anything like this before, I’m asking whether I’m moving in the right direction. In the end, I want a simple, preferably low-latency desktop capture video stream that I can view from a remote device.

    Also, I’m wondering if I can make use of the dirty and move regions provided by Desktop Duplication. Instead of encoding the frame, I could send them over the network and do the processing on the client side, but this means the client has to have DirectX 11.1 or higher available, which is impossible if I want to stream to a mobile platform.
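
    To make that last idea concrete, here is a hypothetical sketch (assuming dupl is an IDXGIOutputDuplication pointer already created with IDXGIOutput1::DuplicateOutput) of acquiring one desktop frame and reading the move and dirty rectangles that would be sent to the client instead of a full encoded frame.

    #include <dxgi1_2.h>
    #include <vector>
    #pragma comment(lib, "dxgi.lib")

    // Acquire the next desktop frame and copy out its move/dirty rectangle metadata.
    HRESULT ReadFrameMetadata(IDXGIOutputDuplication* dupl,
                              std::vector<DXGI_OUTDUPL_MOVE_RECT>& moveRects,
                              std::vector<RECT>& dirtyRects)
    {
        moveRects.clear();
        dirtyRects.clear();

        DXGI_OUTDUPL_FRAME_INFO info = {};
        IDXGIResource* desktopResource = nullptr;

        // Wait up to 100 ms for the next update; DXGI_ERROR_WAIT_TIMEOUT just means "no change".
        HRESULT hr = dupl->AcquireNextFrame(100, &info, &desktopResource);
        if (FAILED(hr))
            return hr;

        if (info.TotalMetadataBufferSize > 0) {
            UINT bytes = 0;

            // Move rectangles are retrieved first, then dirty rectangles.
            moveRects.resize(info.TotalMetadataBufferSize / sizeof(DXGI_OUTDUPL_MOVE_RECT));
            hr = dupl->GetFrameMoveRects(
                static_cast<UINT>(moveRects.size() * sizeof(DXGI_OUTDUPL_MOVE_RECT)),
                moveRects.data(), &bytes);
            if (SUCCEEDED(hr)) {
                moveRects.resize(bytes / sizeof(DXGI_OUTDUPL_MOVE_RECT));

                dirtyRects.resize(info.TotalMetadataBufferSize / sizeof(RECT));
                hr = dupl->GetFrameDirtyRects(
                    static_cast<UINT>(dirtyRects.size() * sizeof(RECT)),
                    dirtyRects.data(), &bytes);
                if (SUCCEEDED(hr))
                    dirtyRects.resize(bytes / sizeof(RECT));
            }
        }

        // desktopResource holds the frame texture (QueryInterface to ID3D11Texture2D to
        // encode it); release it and the frame once this update has been processed.
        if (desktopResource)
            desktopResource->Release();
        dupl->ReleaseFrame();
        return hr;
    }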