Advanced search

Media (1)

Keyword: - Tags -/publicité

Other articles (62)

  • Encoding and transformation into formats readable on the Internet

    10 April 2011

    MediaSPIP transforms and re-encodes uploaded documents in order to make them readable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, OGV and WebM. The "MP4" version is also used by the Flash fallback player required for older browsers.
    Audio documents are likewise re-encoded into the two formats usable by HTML5: MP3 and Ogg. The "MP3" version (...)
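    MediaSPIP's actual conversion settings are internal to the platform, but as a rough sketch, the kind of ffmpeg commands such a transcode involves would look like the following (file names and codec choices are illustrative assumptions, not MediaSPIP's real configuration):

     # Video: the three HTML5-friendly formats.
     ffmpeg -i source.mov -c:v libx264 -c:a aac output.mp4
     ffmpeg -i source.mov -c:v libtheora -c:a libvorbis output.ogv
     ffmpeg -i source.mov -c:v libvpx -c:a libvorbis output.webm
     # Audio: the two HTML5-friendly formats.
     ffmpeg -i source.wav -vn -c:a libmp3lame output.mp3
     ffmpeg -i source.wav -vn -c:a libvorbis output.ogg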

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • APPENDIX: Plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several plugins in addition to those used by the channels in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance when users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (8912)

  • Recording alpine container audio

    28 June 2021, by Sackhugger

    I am looking to run a headless browser inside an Alpine Linux container and stream the audio/video through ffmpeg to an RTMP server. Grabbing video from X11 and Xvfb has been no issue, but I have not been able to find out how to grab audio using ALSA/PulseAudio without any kind of audio card. Is there a way to create a virtual audio card that ffmpeg can pull from? I understand Alpine is minimal, but I need to keep my container image to a minimum.
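    One approach worth trying (a sketch on my part, assuming the pulseaudio package is installed and a daemon is running in the container; I have not verified this on Alpine specifically) is to load a PulseAudio null sink as a virtual output device and have ffmpeg capture its monitor source:

     # Create a virtual output device; the headless browser plays audio into it.
     pactl load-module module-null-sink sink_name=virtual_speaker
     # Grab video from the Xvfb display and audio from the sink's monitor source,
     # then stream both to the RTMP server (hypothetical URL).
     ffmpeg -f x11grab -i :0.0 -f pulse -i virtual_speaker.monitor \
            -c:v libx264 -c:a aac -f flv rtmp://example.com/live/streamkey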

    


  • Python: mp3 to wave.open(f,'r') through ffmpeg pipe

    7 April 2015, by user2754098

    I'm trying to decode an MP3 to WAV using ffmpeg:

    import alsaaudio
    import wave
    from subprocess import Popen, PIPE

     with open('filename.mp3', 'rb') as infile:
         # Feed the MP3 to ffmpeg on stdin; read the decoded WAV from its stdout.
         p = Popen(['ffmpeg', '-i', '-', '-f', 'wav', '-'], stdin=infile, stdout=PIPE)
         ...

    Next I want to redirect the data from p.stdout.read() to wave.open(f, 'r') so I can use readframes(n) and the other methods. But I cannot, because the file argument of wave.open() can only be a file name or an open file object.

     ...
     # read() returns the raw WAV bytes, not a file name or an open file
     # object, which is what triggers the TypeError below.
     file = wave.open(p.stdout.read(), 'r')
     card = 'default'
     device = alsaaudio.PCM(card=card)
     device.setchannels(file.getnchannels())
     device.setrate(file.getframerate())
     device.setformat(alsaaudio.PCM_FORMAT_S16_LE)
     device.setperiodsize(320)
     # Pump decoded frames to the ALSA device until the stream is exhausted.
     data = file.readframes(320)
     while data:
         device.write(data)
         data = file.readframes(320)

    I got:

    TypeError: file() argument 1 must be encoded string without NULL bytes, not str

    So is it possible to handle the data from p.stdout.read() with wave.open()?
    Making a temporary .wav file isn't a solution.

    Sorry for my English.
    Thanks.
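    One way to make this work (a sketch, assuming the decoded WAV fits in memory; note that a WAV header written to a pipe may contain placeholder size fields, which the wave module tolerates for sequential reads) is to wrap the bytes from stdout in an io.BytesIO buffer, since wave.open() accepts any seekable file-like object as well as a file name:

     import io
     import wave
     from subprocess import Popen, PIPE

     with open('filename.mp3', 'rb') as infile:
         p = Popen(['ffmpeg', '-i', '-', '-f', 'wav', '-'], stdin=infile, stdout=PIPE)
         wav_bytes, _ = p.communicate()

     # BytesIO turns the in-memory WAV data into a seekable file-like object,
     # which satisfies wave.open()'s "name or open file pointer" requirement.
     f = wave.open(io.BytesIO(wav_bytes), 'rb')
     print('%d channels at %d Hz' % (f.getnchannels(), f.getframerate()))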

  • How can ffmpeg be made as efficient as Android's built-in video viewer?

    6 January 2016, by Nicholas

    I have a project based on https://ikaruga2.wordpress.com/2011/06/15/video-live-wallpaper-part-1/, which uses an older copy of the ffmpeg libraries from http://bambuser.com/opensource. Within the C++ code in this project we have the following lines of code:

           // Time a single avcodec_decode_video() call.
           unsigned long long current = GetCurrentTimeInNanoseconds();
           avcodec_decode_video(pCodecCtx, pFrame, &frameFinished, packet.data, packet.size);
           // Log the elapsed decode time, converted from nanoseconds to milliseconds.
           __android_log_print(ANDROID_LOG_DEBUG, "getFrame>>>>", "decode video time: %llu", (GetCurrentTimeInNanoseconds() - current)/1000000);

    This code consistently reports between 60 and 90 ms to decode each frame on an Xperia Ion, using a 1280x720 H.264 source video file. The other processing needed to get the frame onto the screen takes an average of 30 ms more, with very little variation. This leads to frame rates of 10-11 fps.

    Ignoring that other processing, a decode that takes an average of 75 ms would yield 13 fps. However, when I browse my SD card and open that MP4 file in the native viewer, it plays at a full 30 fps. Further, when I open a 1920x1080 version of the same MP4 in the native viewer, it also runs at a full 30 fps without stutter or lag. This implies (to my novice eye) that something is very, very wrong, as the hardware is obviously capable of decoding many times faster.

    What flags or options can be passed to avcodec_decode_video to optimize decode speed to match that of the native viewer? Can optimizations be made elsewhere to speed things up further? Is there a reason the native viewer can decode almost an order of magnitude faster (taking the 1920x1080 results into account)?
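    One quick sanity check (my suggestion, not something from the original post) is to measure pure software decode throughput on the device with ffmpeg's -benchmark flag and the null muxer, which decodes the file without rendering anything:

     ffmpeg -benchmark -i input.mp4 -f null -

    If that runs much faster than real time, the cost is in the colour conversion and rendering path rather than the decoder; if not, the gap is in software decoding itself, and the native viewer is most likely using the device's hardware decoder.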

    EDIT

    The answer below is very helpful, but is not practical for me at this time. In the meantime I have managed to decrease decoding time by 70% with some optimal encoding flags found through many, many hours of trial and error. Here are the ffmpeg arguments I'm using for encoding, in case it helps anyone else who stumbles across this post:

           ffmpeg.exe -i "#inputFilePath#" -c:v libx264 -preset veryslow -g 2 -y -s 910x512 -b 5000k -minrate 2000k -maxrate 8000k -pix_fmt yuv420p -tune fastdecode -coder 0 -flags -loop -profile:v main -x264-params subme=5:ref=4 "#outputFilePath#"

    With these settings ffmpeg is decoding frames in 20-25 seconds, though with the sws_scale and then writing out to the texture I’m still hovering at 22 FPS on an Xperia Ion at a lower resolution than I’d like.