Advanced search

Media (0)

Keyword: - Tags -/performance

No media matching your criteria is available on this site.

Other articles (73)

  • Improving the basic version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select fields. See the two images below for a comparison.
    To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What's new?
    On the software-dependency side: the latest versions of FFMpeg (>= v1.2.1) are used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct steps.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; and generation of a thumbnail: extraction of a (...)

On other sites (13307)

  • How can I stream raw video frames AND audio to FFMPEG with Python 2.7?

    18 November 2017, by Just Askin

    I am streaming raw video frames from Pygame to FFMPEG and then sending them to an RTMP stream, but for the life of me I can't figure out how to send live audio with the same Python module. It does not need to be the Pygame mixer, but I am not opposed to using it if that is where the best answer lies. I'm pretty sure it's not, though.

    My question is this: what is the best strategy for sending live audio output from a program to FFMPEG, along with the raw video frames, simultaneously from the same Python module?

    My program is large, and eventually I would like to build options to switch audio inputs from a queue of music, a microphone, or any other random sounds from any program I want to use. But for the time being, I just want something to work. I am starting off with a simple Espeak command.

    Here is my Python command:

    command = ['ffmpeg', '-re', '-framerate', '22', '-s', '1280x720', '-pix_fmt', 'rgba', '-f', 'rawvideo', '-i', '-', '-f', 's16le', '-ar', '22500', '-i', '/tmp/audio', '-preset', 'ultrafast', '-pix_fmt', 'rgba', '-b:v', '2500', '-s', 'hd720', '-r', '25', '-g', '50', '-crf', '20', '-f', 'flv', 'rtmp://xxx']

    import subprocess as sp
    pipe = sp.Popen(command, stdin=sp.PIPE)

    Then I send my frames to stdin from within my main while True: loop.

    The problem I run into with this strategy is that I can't figure out how to shove audio into FFMPEG from within Python without blocking the pipe. After hours of research, I am pretty confident I can't use the same pipe to send the audio along with the frames. I thought a named pipe was my solution (it works when running Espeak outside of Python), but it blocks Python until Espeak is done... so no good.

    I assume I need threading or multiprocessing, but I cannot figure out from the official documentation or any other resources how to solve my problem with them.

    The ['-f', 's16le', '-ar', '22500', '-i', '/tmp/audio'] settings work if I run Espeak from a separate terminal with espeak 'some text' --stdout > /tmp/audio.

    I am using CentOS 7, Python 2.7, Pygame, and the latest build of FFMPEG.
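
    One strategy that seems workable (a sketch only, not tested against this exact setup; the /tmp/audio path comes from the question, and the silence payload is a placeholder): keep the named pipe, but open and feed it from a background thread, so the blocking open()/write() calls never stall the main frame loop.

```python
import os
import threading

def feed_audio(fifo_path, chunks):
    # Opening a FIFO for writing blocks until a reader (e.g. FFMPEG's
    # '-i /tmp/audio' input) opens the other end, so it runs in a
    # background thread instead of the main frame loop.
    with open(fifo_path, 'wb') as fifo:
        for chunk in chunks:
            fifo.write(chunk)

fifo_path = '/tmp/audio'
if not os.path.exists(fifo_path):
    os.mkfifo(fifo_path)

# Placeholder payload: one second of s16le silence at 22050 Hz.
silence = [b'\x00\x00' * 22050]

audio_thread = threading.Thread(target=feed_audio, args=(fifo_path, silence))
audio_thread.daemon = True  # don't keep the process alive just for the writer
audio_thread.start()
# ... the main while True: loop keeps writing video frames to pipe.stdin ...
```

    The same thread could instead read Espeak's stdout in chunks and copy them into the FIFO, replacing the espeak 'some text' --stdout > /tmp/audio redirection.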

  • OpenCV Alpha Channel support

    14 February 2014, by adriagil

    I've tried many different solutions but I'm stuck at this point.

    I have a sequence of .png files with alpha channel.
    If I pick one of the files and split its channels, I get the expected result: an array of 4 channel planes, including the alpha channel.

    Mat check = imread("1.png", -1); // -1 = CV_LOAD_IMAGE_UNCHANGED, keeps the alpha plane

    printf("channels = %d", check.channels()); // got 'channels = 4'

    Then I expected to get the same results for a movie file.

    With FFMPEG I converted the .png sequence to a .mov file using the "qtrle" codec, which I'm sure supports the alpha channel.

    ffmpeg -pix_fmt argb -i sequence_%d.png -vcodec qtrle output.mov

    Then I process the video file's frames with OpenCV:

    Mat frame;
    VideoCapture cap ("output.mov");
    if (cap.grab())
       cap.retrieve(frame);
    printf("channels = %d", frame.channels()); // got 'channels = 3'

    I've checked the output generated by ffmpeg and it seems to be encoded correctly, with the alpha channel stored.

    Does OpenCV not support alpha channels in movie files?
    If so, does anyone know an alternative way to do it with C++ or other libraries?
    Can this be done with DirectX in some way (using OpenCV only for reading the video)?

    In the official docs I found that cv::VideoCapture::retrieve() has a second 'channel' argument, but I tried the following with the same results (no alpha channel):

    cap.retrieve(frame, 4);
    cap.retrieve(frame, -1);

    Since cv::VideoCapture supports loading image sequences, I tried to load the PNG sequence directly, but I got the following warning and could not play the movie file:

    VideoCapture cap("sequence_%d.png");
    warning: Could not find codec parameters (../../modules/highgui/src/cap_ffmpeg_impl.hpp)

    Why do I get that result if I can read the same PNGs with imread("")?

    I've also tried to encode the .png sequence again with ffmpeg:

    ffmpeg -pix_fmt rgba -i sequence_%d.png -vcodec png output.mov

    But I got exactly the same warning as before.

    Any suggestion would be much appreciated!

    Note: I'm using OpenCV 2.4.2 right now... maybe updating to 2.4.8 would solve the problem?
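
    Before blaming the container or the decoder, it can be worth verifying that the source frames really store an alpha channel. The colour-type byte in a PNG's IHDR chunk answers that without any imaging library (a sketch; png_has_alpha is an illustrative helper, not an OpenCV API):

```cpp
#include <fstream>
#include <stdexcept>
#include <string>

// Reads the colour-type byte of a PNG's IHDR chunk.
// Layout: 8-byte signature, 4-byte chunk length, 4-byte "IHDR",
// 4-byte width, 4-byte height, bit depth (1 byte), colour type (1 byte).
bool png_has_alpha(const std::string &path) {
    std::ifstream in(path.c_str(), std::ios::binary);
    unsigned char header[26];
    if (!in.read(reinterpret_cast<char *>(header), 26))
        throw std::runtime_error("file too short");
    static const unsigned char sig[8] = {0x89, 'P', 'N', 'G', '\r', '\n', 0x1a, '\n'};
    for (int i = 0; i < 8; ++i)
        if (header[i] != sig[i])
            throw std::runtime_error("not a PNG file");
    unsigned char colourType = header[25];
    // Colour type 6 = truecolour + alpha, 4 = greyscale + alpha.
    return colourType == 4 || colourType == 6;
}
```

    If this reports no alpha for the sequence frames, the channel OpenCV drops was never in the files to begin with, and the qtrle step is not at fault.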

  • A question about the ffmpeg drawtext filter [closed]

    5 May 2024, by B1GGersnow

    I tried to use Rockchip (aarch64) hardware acceleration and add a drawtext filter to add watermarks. However, squares appear when the text contains multiple Chinese characters.

    These are my configure options:

    ./configure --prefix=/usr --enable-gpl --enable-version3 --enable-libdrm --enable-rkmpp --enable-rkrga --enable-filter=drawtext --enable-libharfbuzz --enable-libfreetype --enable-libfontconfig --enable-libfribidi

    1. ffmpeg -hwaccel rkmpp -hwaccel_output_format drm_prime -i 1.mp4 -vf scale_rkrga=w=1920:h=1080,hwdownload,format=nv12,drawtext=text='中文':fontfile=msyh.ttc:fontsize=200 -c:v h264_rkmpp -y -t 10 output.mp4

    2. ffmpeg -hwaccel rkmpp -hwaccel_output_format drm_prime -i 1.mp4 -vf "scale_rkrga=w=1920:h=1080,hwdownload,format=nv12,drawtext=text='中文字幕测试':fontfile=msyh.ttc:fontsize=200" -c:v h264_rkmpp -y -t 10 output.mp4

    But when I install the ffmpeg that Ubuntu officially maintains via apt install ffmpeg, I get the right result. So I think the problem is in the libraries, and I tried compiling using the official libraries:

    ./configure --prefix=/usr --enable-gpl --enable-version3 --enable-filter=drawtext --enable-libharfbuzz --enable-libfreetype --enable-libfontconfig --enable-libfribidi --enable-libx264

    I still get garbled results.

    Is this because something is wrong with the libraries my build depends on?

    When I compiled it with the same parameters on WSL, there was no garbled text.
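
    A couple of diagnostics may help narrow this down (a sketch only; it assumes ffmpeg and fontconfig's fc-scan are on the PATH, and that msyh.ttc is in the current directory):

```shell
# Check that the suspect build was really configured with the text stack.
if command -v ffmpeg >/dev/null; then
  ffmpeg -hide_banner -buildconf | grep -E 'libfreetype|libfontconfig|libharfbuzz' \
    || echo 'text stack not enabled in this build'
fi

# Check that fontconfig can read the font and that its language coverage
# includes zh; squares usually mean the selected face has no glyphs for
# the code points being drawn.
if command -v fc-scan >/dev/null; then
  fc-scan msyh.ttc --format '%{family}: %{lang}\n'
fi
```

    If the working apt build lists the same three libraries but the self-built one does not, the difference is in the build, not the command line.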