Advanced search

Media (0)

Word: - Tags -/diogene

No media matching your criteria is available on this site.

Other articles (72)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administrer" (administer) section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language; once one has, the language is greyed out in the configuration and (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, as long as your MediaSPIP installation is at version 0.2 or later. If in doubt, contact your MediaSPIP administrator to find out.

On other sites (9850)

  • FFmpeg Android version without OpenSSL

    21 July 2016, by Jose Gonzalez

    Is there a version of FFmpeg for Android without OpenSSL? Google Play is not accepting my app because of a security issue with OpenSSL, and I'm looking for a way to get rid of it.
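
    FFmpeg only links against OpenSSL when it is explicitly enabled at build time, so the usual answer is to build (or obtain) a copy configured without it. A minimal sketch of such a configure call, assuming an NDK cross-toolchain is already on the PATH; the toolchain prefix and target flags are placeholders to adapt to your build setup, and note that without a TLS library the build loses native https support:

        # OpenSSL is off by default; --disable-openssl just makes that explicit.
        ./configure \
            --target-os=android \
            --arch=arm \
            --enable-cross-compile \
            --cross-prefix=arm-linux-androideabi- \
            --disable-openssl \
            --disable-gnutls
        make

        # Verify there is no OpenSSL linkage in the result:
        readelf -d ffmpeg | grep -i ssl    # should print nothing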

  • Change WAV, AIFF or MOV audio sample rate WITHOUT changing the number of samples

    6 March 2013, by John Pilgrim

    I need a very precise way to speed up audio.
    I am preparing films for OpenDCP, an open-source tool for making Digital Cinema Packages for screening in theaters.
    My source files are usually QuickTime MOV files at 23.976 fps with 48.000 kHz audio.
    Sometimes my audio is a separate 48.000 kHz WAV.
    (FWIW, the video frame rate of the source is actually 24/1.001 frames per second, which is a repeating decimal.)

    The DCP standard is built around a 24.000 fps / 48.000 kHz program, so both the audio and the video of the source need to be sped up.
    The image processing workflow inherently involves converting the MOV to a TIF sequence, frame for frame, which is then assumed to be 24.000 fps, so I don't have to get involved in the internals of the QT Video Media Handler.

    But speeding up the audio to match is proving difficult. Most audio programs cannot get the number of audio samples to line up with the retimed image frames: a 0.1% speed increase in Audacity results in the wrong number of samples. The only pathway I have found that works is to use Apple Cinema Tools to conform the 23.976 fps / 48.000 kHz MOV to 24.000 fps / 48.048 kHz (which it does by changing the QuickTime headers) and then use QuickTime Player to export the audio from that file at 48.000 kHz, resampling it. This is frame accurate.

    So my question is: are there settings in ffmpeg or sox that will precisely speed up the audio in a MOV, WAV, or AIFF? I would like a cross-platform solution, so I am not dependent on Cinema Tools, which is MacOS-only.

    I know this is a LOT of background. Feel free to ask clarifying questions !
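
    The 23.976 → 24.000 fps conform is a speed-up of exactly 1001/1000, and the Cinema Tools trick can be reproduced directly: relabel the 48.000 kHz stream as 48.048 kHz, then resample back to 48.000 kHz. A hedged sketch with both tools; file names are placeholders, and since resamplers may round differently it is worth verifying the output length sample-for-sample:

        # ffmpeg: reinterpret the samples as 48048 Hz (no resampling),
        # then resample to 48000 Hz; duration shrinks by 1000/1001
        ffmpeg -i input.wav -af asetrate=48048,aresample=48000 output.wav

        # sox: speed up pitch and tempo together by a factor of 1.001;
        # sox resamples back to the input rate automatically
        sox input.wav output.wav speed 1.001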

  • LibAV - what approach to take for realtime audio and video capture?

    26 July 2012, by pollux

    I'm using libav to encode raw RGB24 frames to H.264 and mux them into FLV. This works fine, and I've streamed for more than 48 hours without any problems! My next step is to add audio to the stream. I'll be capturing live audio and I want to encode it in real time using Speex, MP3, or Nellymoser.

    Background info

    I'm new to digital audio, so I might be doing things wrong, but basically my application gets a "float" buffer with interleaved audio. This "audioIn" function gets called by the application framework I'm using. The buffer contains 256 samples per channel, and I have 2 channels. Because I might be mixing up terminology, this is how I use the data:

    #include <limits>   // std::numeric_limits

    // input = interleaved float samples in the range [-1.0, 1.0]
    // bufferSize = 256 (samples per channel)
    // nChannels = 2
    void audioIn(float* input, int bufferSize, int nChannels) {
        // convert from interleaved float to interleaved S16
        // (assumes the input never leaves the [-1.0, 1.0] range)
        short* buf = new short[bufferSize * nChannels];
        for (int i = 0; i < bufferSize; ++i) {   // loop over all sample frames
            int dx = i * nChannels;
            buf[dx + 0] = static_cast<short>(input[dx + 0] * std::numeric_limits<short>::max());  // first channel
            buf[dx + 1] = static_cast<short>(input[dx + 1] * std::numeric_limits<short>::max());  // second channel
        }

        // hand the converted buffer to the libav wrapper
        av.addAudioFrame(reinterpret_cast<unsigned char*>(buf), bufferSize, nChannels);

        delete[] buf;
    }

    Now that I have a buffer where each sample is 16 bits, I pass this short* buffer to my wrapper function av.addAudioFrame(). In this function I create a buffer before I encode the audio. From what I've read, the AVCodecContext of the audio encoder sets frame_size, and this frame_size must match the number of samples in the buffer when calling avcodec_encode_audio2(). I think this because of what is documented here.

    In particular, this line: "If it is not set, frame->nb_samples must be equal to avctx->frame_size for all frames except the last." (Please correct me here if I'm wrong about this.)

    After encoding I call av_interleaved_write_frame() to actually write the frame.
    When I use MP3 as the codec, my application runs for about 1-2 minutes and then my server, which is receiving the video/audio stream (FLV over TCP), disconnects with the message "Frame too large: 14485504". The RTMP server generates this message because it is receiving a frame that is way too big, probably because I'm not interleaving correctly with libav.

    Questions:

    • There are quite a few bits I'm not sure of, even after going through the libav source code, so I'm hoping someone has a working example of encoding audio that comes from a buffer "outside" libav (i.e. from your own application): how do you create a buffer that is large enough for the encoder, and how do you make "realtime" streaming work when you need to wait for this buffer to fill up? (One possible approach is sketched after this list.)

    • As I wrote above, I need to keep track of a buffer before I can encode. Does someone else have code which does this? I'm using AVAudioFifo now. The functions which encode the audio and fill/read the buffer are here too: https://gist.github.com/62f717bbaa69ac7196be

    • I compiled with --enable-debug=3 and disabled optimizations, but I'm not seeing any
      debug information. How can I make libav more verbose?
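
    For what it's worth, a common pattern for the first two questions is to push every incoming 256-sample block into the AVAudioFifo and only pull out chunks of exactly avctx->frame_size samples once enough have accumulated, encoding each chunk as it becomes available. A minimal sketch under those assumptions (interleaved S16 input as above, an already-opened encoder context, error handling omitted; the old avcodec_encode_audio2() API is used since that is what the question references):

        extern "C" {
        #include <libavcodec/avcodec.h>
        #include <libavutil/audio_fifo.h>
        }

        // fifo was created once with, e.g.:
        //   AVAudioFifo* fifo = av_audio_fifo_alloc(AV_SAMPLE_FMT_S16, 2, 1);
        // (av_audio_fifo_write() grows it as needed)
        void bufferAndEncode(AVCodecContext* avctx, AVAudioFifo* fifo,
                             short* samples, int nbSamples) {
            void* in[1] = { samples };
            av_audio_fifo_write(fifo, in, nbSamples);

            // encode as many full frames as the FIFO now holds;
            // leftover samples simply wait for the next callback
            while (av_audio_fifo_size(fifo) >= avctx->frame_size) {
                AVFrame* frame = av_frame_alloc();
                frame->nb_samples     = avctx->frame_size;
                frame->format         = avctx->sample_fmt;
                frame->channel_layout = avctx->channel_layout;
                av_frame_get_buffer(frame, 0);

                void* out[1] = { frame->data[0] };
                av_audio_fifo_read(fifo, out, avctx->frame_size);

                // a running sample count gives monotonically increasing pts
                static int64_t samplesOut = 0;
                frame->pts = samplesOut;   // in samples; rescale at mux time
                samplesOut += frame->nb_samples;

                AVPacket pkt;
                av_init_packet(&pkt);
                pkt.data = NULL;   // let the encoder allocate
                pkt.size = 0;
                int gotPacket = 0;
                avcodec_encode_audio2(avctx, &pkt, frame, &gotPacket);
                if (gotPacket) {
                    // set pkt.stream_index and rescale pkt.pts/dts to the
                    // stream time base, then hand it to the muxer:
                    //   av_interleaved_write_frame(oc, &pkt);
                    av_free_packet(&pkt);
                }
                av_frame_free(&frame);
            }
        }

    As for the third question, --enable-debug only affects the debug symbols in the build; runtime verbosity is controlled by the log level, e.g. av_log_set_level(AV_LOG_DEBUG) at startup (or -loglevel debug with the command-line tools).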

    Thanks!