Other articles (100)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MédiaSpip installation is at version 0.2 or later. If needed, contact the administrator of your MédiaSpip to find out.

  • Support for farm deployment

    12 April 2011

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share setup costs between several projects or individuals; to rapidly deploy a multitude of unique sites; to avoid dumping every creation into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)

  • Adding user-specific information and other author-related behavior changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you modify certain user-related behaviors (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

On other sites (10340)

  • best approach for extracting image sequence from video file in Java

    13 mai 2013, par Daniel Ruf

    Well, there are FFMPEG and some Java bindings and wrappers for it, but then I need to distribute the right FFMPEG binary for each specific platform.

    Isn't there any plain Java solution or library, without dependencies like FFMPEG, for converting a video file to an image sequence?

    Solutions like FFMPEG, Xuggler or JMF (abandoned) are not suitable. Is there really no pure Java solution for this?

    Maybe at least for specific video codecs/files?

    I just want to extract the images from the video file to JPEG/PNG files and save them to disk.

  • LibAV - what approach to take for realtime audio and video capture?

    26 juillet 2012, par pollux

    I'm using libav to encode raw RGB24 frames to h264 and mux it into flv. This works
    fine, and I've streamed for more than 48 hours without any problems! My next step
    is to add audio to the stream. I'll be capturing live audio, and I want to encode it
    in real time using Speex, mp3 or Nellymoser.

    Background info

    I'm new to digital audio, so I might be doing things wrong. Basically, my application gets a float buffer with interleaved audio. This audioIn function gets called by the application framework I'm using. The buffer contains 256 samples per channel,
    and I have 2 channels. Because I might be mixing up terminology, this is how I use the
    data:

    // input = array with interleaved audio samples
    // bufferSize = 256 (samples per channel)
    // nChannels = 2
    void audioIn(float* input, int bufferSize, int nChannels) {
        // convert from float [-1.0, 1.0] to S16
        short* buf = new signed short[bufferSize * 2];
        for (int i = 0; i < bufferSize; ++i) {  // loop over all samples
            int dx = i * 2;
            buf[dx + 0] = input[dx + 0] * numeric_limits<short>::max();  // convert frame of the first channel
            buf[dx + 1] = input[dx + 1] * numeric_limits<short>::max();  // convert frame of the second channel
        }

        // hand this to the libav wrapper
        av.addAudioFrame((unsigned char*)buf, bufferSize, nChannels);

        delete[] buf;
    }
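    As an aside, the scaling above silently wraps if a sample ever falls outside [-1.0, 1.0] (which can happen after mixing or gain). A clamped version of the per-sample conversion, as a self-contained sketch (floatToS16 is an illustrative helper name, not part of libav):

    ```cpp
    #include <algorithm>
    #include <cstdint>
    #include <limits>

    // Convert one float sample in [-1.0, 1.0] to signed 16-bit,
    // clamping out-of-range values instead of letting them wrap.
    static int16_t floatToS16(float s) {
        s = std::max(-1.0f, std::min(1.0f, s));  // clamp to the valid range first
        return static_cast<int16_t>(s * std::numeric_limits<int16_t>::max());
    }
    ```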

    Now that I have a buffer where each sample is 16 bits, I pass this short* buffer to my
    wrapper's av.addAudioFrame() function. In this function I create a buffer before I encode
    the audio. From what I've read, the AVCodecContext of the audio encoder sets frame_size, and this frame_size must match the number of samples in the buffer when calling avcodec_encode_audio2(). I think this because of what is documented here.

    Then, especially the line:
    "If it is not set, frame->nb_samples must be equal to avctx->frame_size for all frames except the last." (Please correct me here if I'm wrong about this.)

    After encoding, I call av_interleaved_write_frame() to actually write the frame.
    When I use mp3 as the codec, my application runs for about 1-2 minutes and then my server, which is receiving the video/audio stream (flv, tcp), disconnects with the message "Frame too large: 14485504". This message is generated because the rtmp server is getting a frame that is way too big, probably because I'm not interleaving correctly with libav.
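    Since frame_size (1152 samples per channel for mp3) rarely matches the 256-sample callback buffers, the samples have to be accumulated before each avcodec_encode_audio2() call. A minimal stdlib-only sketch of that accumulation pattern (SampleAccumulator and the sizes are illustrative; libav's AVAudioFifo plays the same role):

    ```cpp
    #include <cstddef>
    #include <vector>

    // Accumulates interleaved S16 samples and hands out fixed-size
    // encoder frames once enough samples have been collected.
    class SampleAccumulator {
    public:
        explicit SampleAccumulator(size_t frameSamples) : frameSamples_(frameSamples) {}

        // Append one callback's worth of interleaved samples.
        void push(const short* samples, size_t count) {
            fifo_.insert(fifo_.end(), samples, samples + count);
        }

        // True when a full encoder frame can be popped.
        bool frameReady() const { return fifo_.size() >= frameSamples_; }

        // Copy one frame's worth of samples out and drop them from the FIFO.
        std::vector<short> pop() {
            std::vector<short> frame(fifo_.begin(), fifo_.begin() + frameSamples_);
            fifo_.erase(fifo_.begin(), fifo_.begin() + frameSamples_);
            return frame;
        }

    private:
        size_t frameSamples_;
        std::vector<short> fifo_;
    };
    ```

    Each audioIn call would push 256 * 2 shorts; the encode loop then pops 1152 * 2 at a time while frameReady() is true, so every frame handed to the encoder has exactly frame_size samples per channel.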

    Questions:

    • There are quite a few bits I'm not sure of, even after going through the libav source code, so I hope someone has a working example of encoding audio that comes from a buffer "outside" libav (i.e. your own application). For instance: how do you create a buffer that is large enough for the encoder? How do you make "realtime" streaming work when you need to wait for this buffer to fill up?

    • As I wrote above, I need to keep track of a buffer before I can encode. Does anyone else have some code that does this? I'm using AVAudioFifo now. The functions that encode the audio and fill/read the buffer are here too: https://gist.github.com/62f717bbaa69ac7196be

    • I compiled with --enable-debug=3 and disabled optimizations, but I'm not seeing any
      debug information. How can I make libav more verbose?

    Thanks!

  • Mix Audio tracks with offset in SOX

    4 août 2012, par Laramie

    From ASP.Net, I am using FFMPEG to convert flv files on a Flash Media Server to wavs that I need to mix into a single MP3 file. I originally attempted this entirely with FFMPEG but eventually gave up on the mixing step, because I don't believe it is possible to combine audio-only tracks into a single result file. I would love to be wrong.

    I am now using FFMPEG to access the FLV files and extract the audio track to wav so that SoX can mix them. The problem is that I must offset one of the audio tracks by a few seconds so that they are synchronized. Each file is one half of a conversation between a student and a teacher. For example, teacher.wav might need to begin 3.3 seconds after student.wav. I can only figure out how to mix the files with SoX when both tracks begin at the same time.

    My best attempt at this point is:

    ffmpeg -y -i rtmp://server/appName/instance/student.flv -ac 1 student.wav
    ffmpeg -y -i rtmp://server/appName/instance/teacher.flv -ac 1 teacher.wav

    sox -m student.wav teacher.wav combined.mp3 splice 3.3

    These tools (FFmpeg/SoX) were chosen based on my best research, but they are not required. Any working solution would allow an ASP.Net service to take the two FMS flvs as input and create a combined MP3 using open-source or free tools.

    EDIT:
    I was able to offset the files using the delay switch in SoX.

    sox -M student.wav teacher.wav combined.mp3 delay 2.8

    I'm leaving the question open in case someone has a better approach than the combined FFMPEG/SOX solution.