
Other articles (35)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, OGV and WebM (supported by HTML5), with MP4 also readable by Flash.
    Audio files are encoded in MP3 and Ogg (supported by HTML5), with MP3 also readable by Flash.
    Where possible, text files are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
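
    As a rough, hedged illustration (not MediaSPIP’s actual pipeline; file names and quality settings here are placeholders), this kind of conversion could be done with ffmpeg along these lines:

    # transcode one upload into the three HTML5-friendly formats
    ffmpeg -i upload.mov -c:v libx264 -crf 23 -c:a aac upload.mp4
    ffmpeg -i upload.mov -c:v libtheora -q:v 7 -c:a libvorbis upload.ogv
    ffmpeg -i upload.mov -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis upload.webm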

  • The accepted formats

    28 January 2010

    The following commands give information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use:
    h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
    m4v: raw MPEG-4 video format
    flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
    Theora
    wmv:
    Possible output video formats
    To begin with, we (...)
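
    As a quick usage sketch, the output of these commands can be piped through grep to look for a specific codec or container, for example:

    ffmpeg -codecs | grep -i h264
    ffmpeg -formats | grep -i webm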

On other sites (6890)

  • FFmpeg - 2 combined videos (splitscreen) removes sound

    10 July 2014, by Joey

    I am stuck on a problem where I am trying to put 2 videos next to each other, like a splitscreen. I found the command to do this here on StackOverflow, but it removes my sound. I can’t figure out how to keep the sound from the 2 videos.

    The command I use:

    exec("ffmpeg -i ".$video1." -i ".$video2." -filter_complex '[0:v]pad=iw*2:ih[int];[int][1:v]overlay=W/2:0[vid]' -map [vid] -c:v libx264 -crf 23 /tmp/output_file.flv", $output, $return);

    The output is exactly how I want it, but I want the two sound streams too.

    EDIT:

    For anyone having the same problem, I ended up doing this in 3 steps:

    # Combine the two audio streams into 1 temp file
    exec("ffmpeg -i ".$video1." -i ".$video2." -filter_complex amix=inputs=2:duration=first:dropout_transition=2 ".$outputSound, $output, $return);

    # Put the two videos side by side as a splitscreen
    exec("ffmpeg -i ".$video1." -i ".$video2." -filter_complex '[0:v]pad=iw*2:ih[int];[int][1:v]overlay=W/2:0[vid]' -map [vid] -c:v libx264 -crf 23 ".$outputVideo, $output, $return);

    # Combine merged audio file with splitscreen video
    exec("ffmpeg -i ".$outputVideo." -i ".$outputSound." -map 0 -map 1 -codec copy -shortest ".$outputCombined, $output, $return);

    Solved!
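
    For what it’s worth, the same result can probably be obtained in a single pass by mixing the audio inside the same filter graph; the following is only a sketch, assuming both inputs carry an audio stream, with placeholder file names:

    ffmpeg -i left.mp4 -i right.mp4 \
        -filter_complex "[0:v]pad=iw*2:ih[int];[int][1:v]overlay=W/2:0[vid];[0:a][1:a]amix=inputs=2:duration=first[aud]" \
        -map "[vid]" -map "[aud]" -c:v libx264 -crf 23 -c:a aac output.mp4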

  • samplerate conversion function fails to produce audible sound, only small pieces of audio

    2 July 2014, by user3749290

    playmp3() using libmpg123

    if (isPaused==0 && mpg123_read(mh, buffer, buffer_size, &done) == MPG123_OK)
    {
       char * resBuffer = &buffer[0]; // 22100 = 0.5s
       buffer = resample(resBuffer, 22050, 22050); // I think the result is 1/2 of audio speed
       if (ao_play(dev, (char*)buffer, done) == 0) {
           return 1;
       }
    }

    resample(), using avcodec from FFmpeg

    #define LENGTH_MS 500       // how many milliseconds of speech to store (0.5 s at 44100 Hz = 22050 samples)
    #define RATE 44100      // the sampling rate (input)
    #define FORMAT PA_SAMPLE_S16NE  // sample size: 8 or 16 bits
    #define CHANNELS 2      // 1 = mono 2 = stereo

    struct AVResampleContext* audio_cntx = 0;
    //(LENGTH_MS*RATE*16*CHANNELS)/8000

       void resample(char in_buffer[],int out_rate,int nsamples,char out_buffer[])
       {
           //char out_buffer[ sizeof( in_buffer ) * 4];
           audio_cntx = av_resample_init( out_rate, //out rate
               RATE, //in rate
               16, //filter length
               10, //phase count
               0, //linear FIR filter
               1.0 ); //cutoff frequency
           assert( audio_cntx && "Failed to create resampling context!");
           int samples_consumed;
           //*out_buffer = malloc(sizeof(in_buffer));
           int samples_output = av_resample( audio_cntx, //resample context
               (short*)out_buffer, //buffout
               (short*)in_buffer,  //buffin
               &samples_consumed,  //&consumed
               nsamples,       //nb_samples
               sizeof(out_buffer)/2,//lenout sizeof(out_buffer)/2
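               // note: out_buffer is a function parameter here, so sizeof(out_buffer)
               // is the size of a pointer, not of the caller's array; only a few
               // samples are requested, which may explain the tiny fragments of audio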
               0);//is_last
           assert( samples_output > 0 && "Error calling av_resample()!" );
           av_resample_close( audio_cntx );    
       }

    When I run this code in the application, the sound I hear is jerky. Why?
    I think the size of the array is right; I calculated it considering that in half a second there should be 22050 samples to store.

  • speex decoding makes wrong sound (FFmpeg on iOS)

    2 July 2014, by user3796700

    I’m trying to use FFmpeg on iOS to play live streams.
    One stream with NellyMoser audio plays successfully, as below:

    avformat_open_input(&formatContext, "rtmp://my/nellymoser/stream/url", NULL, &options);

    Now I tried to play the same stream, but encoded in the Speex format.
    So I followed some tutorials and compiled "ogg.a, speex.a, speexdsp.a" for iOS,
    then recompiled FFmpeg, linking it against those three .a files.

    However, the output is wrong:

    avformat_open_input(&formatContext, "rtmp://my/speex/stream/url", NULL, &options);

    The output sound is 2x faster than normal. It seems like only half of the data is decoded.

    Has anyone tried something similar before?
    I’ve been stuck on this for several days and really need help here.
    Thanks!

    For reference, here is how I compile FFmpeg:

    ./configure \
    --enable-libspeex \
    --disable-doc \
    --disable-ffmpeg \
    --disable-ffserver \
    --enable-cross-compile \
    --arch=armv7 \
    --target-os=darwin \
    --cc=clang \
    --as='gas-preprocessor/gas-preprocessor.pl clang' \
    --sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.1.sdk \
    --cpu=cortex-a8 \
    --extra-cflags='-arch armv7 -I ../speex/armv7/include -I ../libogg/armv7/include' \
    --extra-ldflags='-arch armv7 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.1.sdk' \
    --extra-ldflags='-L ../speex/armv7/lib -lspeexdsp -lspeex -L ../libogg/armv7/lib -logg' \
    --enable-pic \
    --prefix=/Users/chienlo/Desktop/speexLibrary/ffmpeg-2.2.4/armv7
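
    A quick sanity check worth trying (not part of the original post) is to confirm that the resulting binary actually picked up the Speex decoder, for example:

    ./ffmpeg -decoders | grep -i speex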