Other articles (111)

  • Customising categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be likened to a section (rubrique).
    For a category-type document, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration of form masks.
    For a media-type document, the fields not displayed by default are: Short description
    It is also in this configuration area that you can specify the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (10244)

  • How can I run command-line FFMPEG and accept multiple pipes (video and audio) without blocking on the first input?

    18 February 2016, by Version135b

    I’m trying to mux h264 and aac created with MediaCodec using FFMPEG, and also use FFMPEG’s RTMP support to send to YouTube. I’ve created two pipes, and am writing from Java (Android) through WriteableByteChannels. I can send to one pipe just fine (accepting null audio) like this:

    ./ffmpeg -f lavfi -i aevalsrc=0 -i "files/camera-test.h264" -acodec aac -vcodec copy -bufsize 512k -f flv "rtmp://a.rtmp.youtube.com/live2/XXXX"

    YouTube streaming works perfectly (but I have no audio). Using two pipes, this is my command:

    ./ffmpeg \
    -i "files/camera-test.h264" \
    -i "files/audio-test.aac" \
    -vcodec copy \
    -acodec copy \
    -map 0:v:0 -map 1:a:0 \
    -f flv "rtmp://a.rtmp.youtube.com/live2/XXXX"

    The pipes are created with mkfifo, and opened from Java like this:

    pipeWriterVideo = Channels.newChannel(new FileOutputStream(outputFileVideo.toString()));

    The order of execution (for now, in my test phase) is: create the files, start ffmpeg (through adb shell), then start recording, which opens the channels. ffmpeg immediately opens the h264 stream and then waits; since it is reading from that pipe, the first channel open (for video) succeeds. When I try to open the audio pipe the same way, it fails because ffmpeg has not actually started reading from it. If I open a second terminal window and cat the audio file, my app spits out what I hope is encoded aac, but ffmpeg fails, usually just sitting there waiting. Here is the verbose output:

    ffmpeg version N-78385-g855d9d2 Copyright (c) 2000-2016 the FFmpeg
    developers
     built with gcc 4.8 (GCC)
     configuration: --prefix=/home/dev/svn/android-ffmpeg-with-rtmp/src/ffmpeg/android/arm
       --enable-shared --disable-static --disable-doc --disable-ffplay
       --disable-ffprobe --disable-ffserver --disable-symver
       --cross-prefix=/home/dev/dev/android-ndk-r10e/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/arm-linux-androideabi-
       --target-os=linux --arch=arm --enable-cross-compile
       --enable-librtmp --enable-pic --enable-decoder=h264
       --sysroot=/home/dev/dev/android-ndk-r10e/platforms/android-19/arch-arm
       --extra-cflags='-Os -fpic -marm'
       --extra-ldflags='-L/home/dev/svn/android-ffmpeg-with-rtmp/src/openssl-android/libs/armeabi '
       --extra-ldexeflags=-pie --pkg-config=/usr/bin/pkg-config
     libavutil      55. 17.100 / 55. 17.100
     libavcodec     57. 24.102 / 57. 24.102
     libavformat    57. 25.100 / 57. 25.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 31.100 /  6. 31.100
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
    matched as AVOption 'debug' with argument 'verbose'.
    Trailing options were found on the commandline.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option async (audio sync method) with argument 1.
    Successfully parsed a group of options.
    Parsing a group of options: input file files/camera-test.h264.
    Successfully parsed a group of options.
    Opening an input file: files/camera-test.h264.
    [file @ 0xb503b100] Setting default whitelist 'file'

    I think if I could just get ffmpeg to start listening to both pipes, the rest would work out!
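    For what it’s worth, the blocking described above can be reproduced without ffmpeg at all: opening a FIFO for writing blocks until a reader opens the other end, and ffmpeg only becomes the reader of its second input after it has finished probing the first. A minimal sketch (the path and payload are placeholders):

    ```shell
    # Opening a FIFO for writing blocks until a reader opens the other end.
    mkfifo /tmp/demo.fifo

    # Background writer: its open() blocks until someone reads.
    printf 'aac-bytes' > /tmp/demo.fifo &

    # The reader unblocks the writer -- in the question above, ffmpeg only
    # plays this role for the second pipe after probing the first input.
    cat /tmp/demo.fifo

    rm /tmp/demo.fifo
    ```

    This is why each write end needs its own thread on the writing side: each open blocks independently until ffmpeg reaches the corresponding input.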

    Thanks for your time.

    EDIT:
    I’ve made progress by decoupling the audio pipe connection from the encoding, but now as soon as the video stream has been passed, it errors on the audio. I started a separate thread to create the WriteableByteChannel for audio, and it never gets past the FileOutputStream creation.

    matched as AVOption 'debug' with argument 'verbose'.
    Trailing options were found on the commandline.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Successfully parsed a group of options.
    Parsing a group of options: input file files/camera-test.h264.
    Successfully parsed a group of options.
    Opening an input file: files/camera-test.h264.
    [file @ 0xb503b100] Setting default whitelist 'file'
    [h264 @ 0xb503c400] Format h264 probed with size=2048 and score=51
    [h264 @ 0xb503c400] Before avformat_find_stream_info() pos: 0 bytes read:15719 seeks:0
    [h264 @ 0xb5027400] Current profile doesn't provide more RBSP data in PPS, skipping
    [h264 @ 0xb503c400] max_analyze_duration 5000000 reached at 5000000 microseconds st:0
    [h264 @ 0xb503c400] After avformat_find_stream_info() pos: 545242 bytes read:546928 seeks:0 frames:127
    Input #0, h264, from 'files/camera-test.h264':
     Duration: N/A, bitrate: N/A
       Stream #0:0, 127, 1/1200000: Video: h264 (Baseline), 1 reference frame, yuv420p(left), 854x480 (864x480), 1/50, 25 fps, 25 tbr, 1200k tbn, 50 tbc
    Successfully opened the file.
    Parsing a group of options: input file files/audio-test.aac.
    Applying option vcodec (force video codec ('copy' to copy stream)) with argument copy.
    Successfully parsed a group of options.
    Opening an input file: files/audio-test.aac.
    Unknown decoder 'copy'
    [AVIOContext @ 0xb5054020] Statistics: 546928 bytes read, 0 seeks

    Here is where I attempt to open the audio pipe.

    new Thread(){
        public void run(){
             Log.d("Audio", "pre thread");
             FileOutputStream fs = null;
             try {
                  fs = new FileOutputStream("/data/data/android.com.android.grafika/files/audio-test.aac");
             } catch (FileNotFoundException e) {
                  e.printStackTrace();
             }
             Log.d("Audio", "made fileoutputstream");  //never hits here
             mVideoEncoder.pipeWriterAudio = Channels.newChannel(fs);
             Log.d("Audio", "made it past opening audio pipe");
        }
    }.start();

    Thanks.

  • How can I combine two separate scripts that are piped together into one script instead of two?

    27 March 2016, by user556068

    For the past couple of hours I’ve been banging my head against the wall trying to figure out something I thought would be simple. Maybe it is, but it’s beyond me at the moment. So I now have two scripts; originally they were part of the same one, but I could never make it work how it should. The first part uses curl to download a file from a site, then uses grep and sed to filter out the text I need, which is put into a plain text file as a long list of website URLs, one per line. The last part of the first script calls youtube-dl to read the batch file in order to obtain the web addresses where the actual content is located. I hope that makes sense.

    youtube-dl reads the batch file and outputs a new list of URLs into the terminal. This second list is not saved to a file because it doesn’t need to be; these URLs change from day to day or hour to hour. Using the read command, the URLs are then passed to ffmpeg with a predetermined set of arguments for the input and output. ffmpeg is executed on every URL it receives and runs quietly in the background.

    The first paragraph describes script1.sh and the second obviously describes script2.sh. When I pipe them together like script1.sh | script2.sh, it works better than I ever thought possible. Maybe I’m nitpicking at this point, but the idea is to have one unified script. For the moment I have simplified things by adding an alias to my .bash_profile.

    Here are the last two commands of script1.

    sed 's/\"\,/\//g' > "$HOME/file2.txt";
    cat $HOME/file2.txt | youtube-dl --ignore-config -iga -

    The trailing - allows youtube-dl to read from stdin.

    The second part of the script, what I’m calling script2 at this point, begins with:

    while read -r input
    do
        ffmpeg [arg] [input] [arg2] [output]
    done

    What am I not seeing that causes the script to hang when the two halves are combined, yet work perfectly when one is piped into the other?
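    One way to fold the two halves into a single file is to wrap each half in a shell function and keep the pipe inside the script. A sketch with placeholder bodies (`part1` and `part2` are hypothetical stand-ins for the real script contents):

    ```shell
    #!/bin/sh
    # Hypothetical combined script: part1 stands in for the
    # curl | grep | sed | youtube-dl half, part2 for the ffmpeg loop.

    part1() {
        # Real script: curl ... | grep ... | sed ... | youtube-dl --ignore-config -iga -
        printf '%s\n' "http://example.com/stream1" "http://example.com/stream2"
    }

    part2() {
        # Real script: ffmpeg [arg] "$input" [arg2] [output]
        while read -r input; do
            printf 'would run ffmpeg on: %s\n' "$input"
        done
    }

    part1 | part2
    ```

    The pipe between the functions behaves exactly like `script1.sh | script2.sh`. A common cause of a combined script hanging instead is that the two halves end up running sequentially, so the `while read` loop waits on the script’s own stdin rather than on the output of the first half.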

  • ffmpeg, stretch audio to x seconds

    9 May 2016, by Max Doumit

    I am trying to make an audio file exactly x seconds long.

    So far I have tried using the atempo filter, doing the following calculation:

    Audio length / desired length = atempo.

    But this is not accurate, and I am having to tweak the tempo manually to get it to an exact fit.

    Are there any other solutions to get this to work? Or am I doing this incorrectly?

    My original file is a WAV file, and my output is an MP3.

    Here is a sample command:

    ffmpeg -i input.wav -codec:a libmp3lame -filter:a "atempo=0.9992323" -b:a 320K output.mp3

    UPDATE.

    I was able to correctly calculate the tempo by changing the way I am receiving the audio length.

    I am now calculating the current audio length using the actual file size and the sample rate.

    Audio Length = file size / (sample rate * 2)

    Sample rate is something like 16000 Hz. You can get it using ffprobe or ffmpeg.
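    As a sanity check on that formula, here is a sketch with made-up numbers. Note the `* 2` assumes 16-bit samples and mono audio; for stereo the divisor would be `sample rate * 2 * channels`:

    ```shell
    # Duration of raw 16-bit mono PCM: bytes / (sample_rate * 2).
    file_size=640000      # bytes of audio data (hypothetical)
    sample_rate=16000     # Hz, e.g. as reported by ffprobe
    duration=$((file_size / (sample_rate * 2)))
    echo "${duration} seconds"   # 640000 / 32000 = 20 seconds
    ```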

    EDIT - Solved

    Figured it out.
    I was calculating the tempo incorrectly.

    Audio length / desired length = atempo

    Should be

    desired length / Audio length = atempo

    I had to do a bit more reading on what the tempo actually does. Hope this helps someone.