Advanced search

Media (0)

Keyword: - Tags -/configuration

No media matching your criteria is available on this site.

Other articles (71)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images for a comparison.
    To do so, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune the configuration of these menus.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu is usually inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

  • Possible deployments

    31 January 2010

    Two types of deployment can be considered, depending on two aspects: the intended installation method (standalone or as a farm); and the expected number of daily encodings and level of traffic.
    Video encoding is a heavy process that consumes a great deal of system resources (CPU and RAM), and this must be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

On other sites (12837)

  • ffmpeg custom buffer sink filter

    25 November 2018, by NadavRub

    Environment

    • Ubuntu 18.04
    • C++
    • ffmpeg 3.4 (git master)
    • ffmpeg is used as a shared lib (InProc) via the C++ API

    Use-case

    • Per this link I am trying to use the 'avfilter_graph_*' APIs to create an ffmpeg graph
    • I would like the graph output to be sent out to my custom code ( part of the hosting application )

    Considered implementations

    • [A] Implement a custom sink filter ( part of libavfilter ) to implement my custom logic
    • [B] Implement a custom sink filter to grab the output samples and send them out to my application ( something similar to DShow SampleGrabber )

    Problem at hand

    With either of the above-mentioned approaches the ffmpeg code has to be modified, and this imposes an overhead in supporting future ffmpeg releases.

    I wonder if there is any straightforward approach for an external ( hosting ) application to grab the graph output with minimal copying of the payload.

    Is there any way to use a custom AVIOContext to achieve that? Can I construct a graph connected to an output AVIOContext? Can I create a custom filter implemented in a module external to libavfilter and associate it with the graph using 'AVFilterContext'?
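    For what it's worth, one direction that would avoid touching libavfilter at all is to terminate the graph with the stock buffersink filter and pull frames from it with av_buffersink_get_frame, which hands out reference-counted frames rather than copies of the payload. The following is only a sketch of that idea, not a drop-in answer: the resolution, pixel format, time base and the pass-through linking are placeholders, and all error handling is omitted.

    extern "C" {
    #include <libavfilter/avfilter.h>
    #include <libavfilter/buffersrc.h>
    #include <libavfilter/buffersink.h>
    }

    // Pass-through graph "buffer" -> "buffersink"; the real filter chain would be linked in between.
    static void pull_filtered_frames(AVFrame *decoded_frame) {
        AVFilterGraph   *graph = avfilter_graph_alloc();
        AVFilterContext *src = nullptr, *sink = nullptr;

        // The buffer source must be told the properties of the frames it will receive (placeholder values here).
        avfilter_graph_create_filter(&src, avfilter_get_by_name("buffer"), "in",
            "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1",
            nullptr, graph);
        avfilter_graph_create_filter(&sink, avfilter_get_by_name("buffersink"), "out",
            nullptr, nullptr, graph);
        avfilter_link(src, 0, sink, 0);                    // insert the real filters here instead of a direct link
        avfilter_graph_config(graph, nullptr);

        av_buffersrc_add_frame(src, decoded_frame);        // hands the frame's references over to the graph

        AVFrame *out = av_frame_alloc();
        while (av_buffersink_get_frame(sink, out) >= 0) {  // reference-counted output, no payload copy
            // out->data[] / out->linesize[] can now be handed to the hosting application
            av_frame_unref(out);
        }
        av_frame_free(&out);
        avfilter_graph_free(&graph);
    }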

  • ffmpeg output file smaller than input file

    3 May 2020, by Debug255

    I am using ffmpeg to rotate videos 90 or 180 degrees in a Python script. It works great. But I am curious as to why the output file is smaller (fewer bytes) than the input file.

    Here are the commands I use:

    180 degrees:

    ffmpeg -i ./input.mp4 -preset veryslow -vf "transpose=2,transpose=2,format=yuv420p" -metadata:s:v rotate=0 -codec:v libx264 -codec:a copy ./output.mp4

    90 degrees:

    ffmpeg -i ./input.mp4 -vf "transpose=2" ./output.mp4

    For example, a GoPro Hero 3 MP4 file was originally 2.0 GB, and the resulting output file was 480.9 MB. Another GoPro file was 2.0 GB and its resulting file was 671.5 MB. Is this maybe because the GoPro files were 2.0 GB but contained empty space, sort of like how some NTFS filesystems create a minimal 4k file even when there are fewer bytes in it?

    If this isn't specific to the GoPro Hero 3, how do I rotate the files 90 or 180 degrees but ensure the output file size stays the same? Or is data loss expected? Does the data loss have to do with the format?

    Note that the quality of the video doesn't appear to be damaged, which is good. So I am interested in learning more about why this is happening, so that I can then read the relevant section of the ffmpeg documentation.

    Thank you!
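    For reference, one way to check whether the difference is simply a lower video bitrate after re-encoding (libx264 defaults to CRF 23, which typically produces a much lower bitrate than a GoPro recording) is to compare the two files with ffprobe; the path below is illustrative:

    ffprobe -v error -select_streams v:0 -show_entries stream=bit_rate -of default=noprint_wrappers=1 ./input.mp4

    Running the same command against ./output.mp4 and comparing the two numbers should account for most of the size difference.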

  • How to stream video from one PC to another with acceptable quality and synchronization?

    15 June 2021, by ErickSkrauch

    I have the following task: to organize the broadcasts of several gamers on the director's computer, which will switch the image to, put simply, whichever player currently has the more interesting gameplay.

    The obvious solution would be to set up an RTMP server and broadcast to it. We tried that. The image quality clearly correlates with the bitrate of the broadcast, but the streams aren't synchronized and there is no way to synchronize them. As far as I know, that's just not built into the RTMP protocol.

    We also tried streaming via the UDP, SRT and RTSP protocols. We got minimal delay but a very blurry image and artifacts from lost packets. It feels like all these formats try to maintain a constant FPS and sacrifice quality to do so.

    What we need:

    • A quality image.
    • Broken frames can be discarded (it's okay to not have a constant FPS).
    • Latency isn't important.
    • The streams should be synchronized within a second or two.
    There is an assumption that broadcasting over UDP should be a solution, but some kind of intermediate buffer is needed to provide the necessary broadcasting conditions. But I don't know how to do that. I assume that we need an intermediate ffmpeg instance, which will read the incoming stream, buffer it and publish the result to some local port, from which the picture will then be picked up by the director's OBS.

    Is there any solution to achieve our goals?
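    As a rough sketch of that intermediate-buffer idea (the ports and buffer size below are made up), a relay ffmpeg instance could read each incoming MPEG-TS stream over UDP with an enlarged receive FIFO and republish it unchanged to a local port for OBS to pick up:

    ffmpeg -i "udp://0.0.0.0:5000?fifo_size=1000000&overrun_nonfatal=1" -c copy -f mpegts udp://127.0.0.1:6000

    This only smooths out network jitter, though; keeping several such streams within a second or two of each other would still depend on the sources' timestamps being preserved and on the director-side player buffering consistently.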