Advanced search

Media (0)

Keyword: - Tags - /api

No media matching your criteria is available on this site.

Other articles (53)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanation of the various notable changes when moving from MediaSPIP version 0.1 to version 0.3. What's new?
    Software dependencies: use of the latest versions of FFmpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customize by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes on your MediaSPIP site, or news about your projects, using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorial content.
    You can customize the form used to create a news item.
    News item creation form: for a document of type "news item", the default fields are: publication date (customize the publication date) (...)

On other sites (10281)

  • LibAV - what approach to take for realtime audio and video capture?

    26 July 2012, by pollux

    I'm using libav to encode raw RGB24 frames to H.264 and mux them to FLV. This works
    fine and I've streamed for more than 48 hours without any problems! My next step
    is to add audio to the stream. I'll be capturing live audio and I want to encode it
    in real time using Speex, MP3 or Nellymoser.

    Background info

    I'm new to digital audio and therefore I might be doing things wrong. Basically, my application gets a "float" buffer with interleaved audio. This "audioIn" function gets called by the application framework I'm using. The buffer contains 256 samples per channel,
    and I have 2 channels. Because I might be mixing up terminology, this is how I use the
    data:

    #include <limits>   // for std::numeric_limits<short>

    // input = array with interleaved float audio samples
    // bufferSize = 256 (samples per channel)
    // nChannels = 2
    void audioIn(float *input, int bufferSize, int nChannels) {
        // Convert the interleaved float samples to interleaved S16.
        short *buf = new short[bufferSize * nChannels];
        for (int i = 0; i < bufferSize; ++i) {   // loop over all sample frames
            int dx = i * nChannels;
            buf[dx + 0] = static_cast<short>(input[dx + 0] * std::numeric_limits<short>::max());  // first channel
            buf[dx + 1] = static_cast<short>(input[dx + 1] * std::numeric_limits<short>::max());  // second channel
        }

        // Hand the converted samples to the libav wrapper.
        av.addAudioFrame(reinterpret_cast<unsigned char *>(buf), bufferSize, nChannels);

        delete[] buf;
    }

    Now that I have a buffer where each sample is 16 bits, I pass this short* buffer to my
    wrapper av.addAudioFrame() function. In this function I create a buffer before I encode
    the audio. From what I read, the audio encoder's AVCodecContext sets the frame_size. This frame_size must match the number of samples in the buffer when calling avcodec_encode_audio2(). I think this because of what is documented here.

    In particular, this line:
    "If it is not set, frame->nb_samples must be equal to avctx->frame_size for all frames except the last." (Please correct me here if I'm wrong about this.)
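
    A minimal sketch of that constraint, assuming an already-opened audio AVCodecContext (called "audio_ctx" here, not a name from the post) and a caller-supplied buffer holding exactly frame_size interleaved S16 samples per channel:

    #include <cstdint>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/samplefmt.h>
    #include <libavutil/mem.h>
    }

    bool encode_audio_chunk(AVCodecContext *audio_ctx, const uint8_t *interleaved_s16,
                            AVPacket *pkt_out)
    {
        AVFrame *frame = avcodec_alloc_frame();      // av_frame_alloc() in newer releases
        if (!frame)
            return false;

        // The sample count must equal frame_size for every frame except the last.
        frame->nb_samples = audio_ctx->frame_size;

        int buf_size = av_samples_get_buffer_size(NULL, audio_ctx->channels,
                                                  audio_ctx->frame_size,
                                                  audio_ctx->sample_fmt, 1);

        // Point frame->data[] at the caller's buffer instead of copying it.
        avcodec_fill_audio_frame(frame, audio_ctx->channels, audio_ctx->sample_fmt,
                                 interleaved_s16, buf_size, 1);

        av_init_packet(pkt_out);
        pkt_out->data = NULL;                        // let the encoder allocate the packet payload
        pkt_out->size = 0;

        int got_packet = 0;
        int ret = avcodec_encode_audio2(audio_ctx, pkt_out, frame, &got_packet);

        av_free(frame);                              // av_frame_free() in newer releases
        return ret >= 0 && got_packet != 0;          // got_packet may be 0 while the encoder buffers
    }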

    After encoding I call av_interleaved_write_frame() to actually write the frame.
    When I use MP3 as the codec, my application runs for about 1-2 minutes and then my server, which is receiving the video/audio stream (FLV, TCP), disconnects with the message "Frame too large: 14485504". This message is generated because the RTMP server is getting a frame which is way too big, and that is probably because I'm not interleaving correctly with libav.
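
    One thing worth double-checking on the muxing side (a hedged sketch, with "fmt_ctx" and "audio_stream" as placeholder names): each encoded packet needs the right stream index, and its timestamps must be rescaled from the encoder's time base to the stream's time base before av_interleaved_write_frame(), otherwise the muxer and the receiving server can see inconsistent packet sizes and timestamps.

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>
    }

    void write_audio_packet(AVFormatContext *fmt_ctx, AVStream *audio_stream,
                            AVCodecContext *audio_ctx, AVPacket *pkt)
    {
        pkt->stream_index = audio_stream->index;

        // Rescale timestamps from the codec time base to the stream time base.
        if (pkt->pts != AV_NOPTS_VALUE)
            pkt->pts = av_rescale_q(pkt->pts, audio_ctx->time_base, audio_stream->time_base);
        if (pkt->dts != AV_NOPTS_VALUE)
            pkt->dts = av_rescale_q(pkt->dts, audio_ctx->time_base, audio_stream->time_base);

        av_interleaved_write_frame(fmt_ctx, pkt);    // interleaves audio and video packets by dts
    }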

    Questions:

  • There are quite a few bits I'm not sure of, even after going through the libav source code, so I'm hoping someone has a working example of encoding audio which comes from a buffer "outside" libav (i.e. your own application). In other words: how do you create a buffer which is large enough for the encoder? How do you make "realtime" streaming work when you need to wait for this buffer to fill up?

  • As I wrote above, I need to keep track of a buffer before I can encode. Does someone else have some code which does this? I'm using AVAudioFifo now; a sketch of that approach follows this list. The functions which encode the audio and fill/read the buffer are here too: https://gist.github.com/62f717bbaa69ac7196be

  • I compiled with --enable-debug=3 and disabled optimizations, but I'm not seeing any
    debug information. How can I make libav more verbose?
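
    As a hedged illustration of the AVAudioFifo approach mentioned above (not code from the gist; "fifo" and "audio_ctx" are placeholder names, and the fifo is assumed to have been created with av_audio_fifo_alloc() for interleaved S16 audio): each 256-sample callback chunk is written into the fifo, and complete encoder frames are drained whenever enough samples have accumulated.

    #include <cstdint>
    #include <vector>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/audio_fifo.h>
    }

    void on_audio_chunk(AVAudioFifo *fifo, AVCodecContext *audio_ctx,
                        int16_t *interleaved, int nb_samples)
    {
        // Buffer the incoming chunk; the fifo grows as needed.
        void *in[] = { interleaved };
        av_audio_fifo_write(fifo, in, nb_samples);

        // Only encode once a full encoder frame (frame_size samples) has accumulated.
        while (av_audio_fifo_size(fifo) >= audio_ctx->frame_size) {
            std::vector<int16_t> chunk(audio_ctx->frame_size * audio_ctx->channels);
            void *out[] = { chunk.data() };
            av_audio_fifo_read(fifo, out, audio_ctx->frame_size);
            // ... fill an AVFrame from chunk and call avcodec_encode_audio2(), as sketched earlier ...
        }
    }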

    Thanks!

  • How to optimize ffmpeg w/ x264 for multiple bitrate output files

    10 October 2013, by Jonesy

    The goal is to create multiple output files that differ only in bitrate from a single source file. The solutions for this that were documented worked, but had inefficiencies. The solution that I discovered to be most efficient was not documented anywhere that I could see. I am posting it here for review and asking if others know of additional optimizations that can be made.

    Source file        MPEG-2 Video (letterboxed) 1920x1080 @ >10 Mbps
                       MPEG-1 Audio @ 384 Kbps
    Destination files  H.264 Video 720x400 @ multiple bitrates
                       AAC Audio @ 128 Kbps
    Machine            Multi-core processor

    The video quality at each bitrate is important, so we are running in 2-pass mode with the 'medium' preset.

    VIDEO_OPTIONS_P2 = -vcodec libx264 -preset medium -profile:v main -g 72 -keyint_min 24 -vf scale=720:-1,crop=720:400

    The first approach was to encode them all in parallel processes:

    ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto -f mp4 out-250.mp4 &
    ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto -f mp4 out-500.mp4 &
    ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto -f mp4 out-700.mp4 &
    

    The obvious inefficiencies are that the source file is read, decoded, scaled, and cropped identically for each process. How can we do this once and then feed the encoders with the result?

    The hope was that generating all the encodes in a single ffmpeg command would optimize-out the duplicate steps.

    ffmpeg -y -i $INPUT_FILE \
    $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto -f mp4 out-250.mp4 \
    $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto -f mp4 out-500.mp4 \
    $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto -f mp4 out-700.mp4

    However, the encoding time was nearly identical to the previous multi-process approach. This leads me to believe that all the steps are again being performed in duplicate.

    To force ffmpeg to read, decode, and scale only once, I put those steps in one ffmpeg process and piped the result into another ffmpeg process that performed the encoding. This improved the overall processing time by 15%-20%.

    INPUT_STREAM="ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 -threads auto -f yuv4mpegpipe -"

    $INPUT_STREAM | ffmpeg -y -f yuv4mpegpipe -i - \
    $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto out-250.mp4 \
    $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto out-500.mp4 \
    $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto out-700.mp4

    Does anyone see potential problems with doing it this way, or know of a better method?

  • FFmpeg: how to use C++ code to change the frame rate of a video file?

    27 March 2014, by user1914692

    I know it would be easier to use FFmpeg commands to change the frame rate of a video file.
    But anyway, if I want to do it in C++ code, using the FFmpeg libraries, how could I do it?

    I think I should be able to find clues in the source code.
    But before proceeding, I hope there are some good introductions or examples.
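
    Not an answer from the original thread, but a hedged sketch of the usual approach when re-encoding with the libraries: the output frame rate is set via the encoder's AVCodecContext time base, and each decoded frame's timestamp is rescaled into that time base ("enc_ctx", "out_fps" and "in_tb" below are placeholder names). Note that merely relabelling timestamps changes playback speed; a true frame-rate conversion also has to drop or duplicate frames.

    #include <cstdint>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/mathematics.h>
    }

    // Only the frame-rate-related pieces of a decode/encode loop are shown.
    void configure_output_rate(AVCodecContext *enc_ctx, int out_fps)
    {
        AVRational tb;
        tb.num = 1;
        tb.den = out_fps;
        enc_ctx->time_base = tb;   // e.g. 1/25 for 25 fps; keep the output AVStream time base consistent
    }

    int64_t rescale_pts(int64_t in_pts, AVRational in_tb, AVRational out_tb)
    {
        // Convert a decoded frame's pts from the input time base to the encoder's time base.
        return av_rescale_q(in_pts, in_tb, out_tb);
    }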