Other articles (47)

  • General document management

    13 May 2011

    MédiaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original downloadable in case it cannot be read in a web browser; and extracting the original document's metadata to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (9828)

  • FFMPEG overlay issue for 2 videos, sound missing from second input

    27 June 2021, by Abhishek Singh

    I'm trying to create an overlay with ffmpeg that takes two FLV files as input and produces an output in which both videos play simultaneously, side by side. Things work fine with the ffmpeg command below:

    ffmpeg -i input1.flv -vf "[in] scale=359:320, pad=2*iw+6:ih [left]; movie=input2.flv, scale=359:320 [right]; [left][right] overlay=365:0 [out]" -b:v 3600k -y output.flv

    But the issue is that the audio of the second video is missing from output.flv; only input1's audio is present in the output.

    Console output is:

    ffmpeg version N-47062-g26c531c Copyright (c) 2000-2012 the FFmpeg developers
  built on Nov 25 2012 12:23:20 with gcc 4.7.2 (GCC)
  configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
  libavutil      52.  9.100 / 52.  9.100
  libavcodec     54. 77.100 / 54. 77.100
  libavformat    54. 37.100 / 54. 37.100
  libavdevice    54.  3.100 / 54.  3.100
  libavfilter     3. 23.102 /  3. 23.102
  libswscale      2.  1.102 /  2.  1.102
  libswresample   0. 17.101 /  0. 17.101
  libpostproc    52.  2.100 / 52.  2.100
Input #0, flv, from 'input1.flv':
  Metadata:
    canSeekToEnd    : false
    createdby       : FMS 4.0
    creationdate    : Mon Jan 07 07:05:40 2013
    encoder         : Lavf54.37.100
  Duration: 00:03:04.81, start: 0.000000, bitrate: 314 kb/s
    Stream #0:0: Video: flv1, yuv420p, 160x120, 3600 kb/s, 1k tbr, 1k tbn, 1k tbc
    Stream #0:1: Audio: mp3, 22050 Hz, mono, s16, 32 kb/s
Output #0, flv, to 'output.flv':
  Metadata:
    canSeekToEnd    : false
    createdby       : FMS 4.0
    creationdate    : Mon Jan 07 07:05:40 2013
    encoder         : Lavf54.37.100
    Stream #0:0: Video: flv1 ([2][0][0][0] / 0x0002), yuv420p, 724x320, q=2-31, 3600 kb/s, 1k tbn, 1k tbc
    Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 22050 Hz, mono, s16p
Stream mapping:
  Stream #0:0 -> #0:0 (flv -> flv)
  Stream #0:1 -> #0:1 (mp3 -> libmp3lame)
Press [q] to stop, [?] for help
frame=  120 fps=0.0 q=4.5 size=     550kB time=00:00:08.57 bitrate= 525.3kbits/s
frame=  225 fps=221 q=2.0 size=    1422kB time=00:00:18.26 bitrate= 637.6kbits/s
Buffer queue overflow, dropping.
[Parsed_overlay_4 @ 02259ae0] Buffer queue overflow, dropping.
    Last message repeated 1 times
frame=  338 fps=222 q=2.0 size=    2414kB time=00:00:26.97 bitrate= 733.0kbits/s
Buffer queue overflow, dropping.
[Parsed_overlay_4 @ 02259ae0] Buffer queue overflow, dropping.
    Last message repeated 12 times
frame=  452 fps=223 q=2.0 size=    3326kB time=00:00:35.38 bitrate= 770.1kbits/s
frame=  562 fps=223 q=2.0 size=    4246kB time=00:00:43.15 bitrate= 806.0kbits/s
Buffer queue overflow, dropping.
[Parsed_overlay_4 @ 02259ae0] Buffer queue overflow, dropping.
    Last message repeated 11 times
frame=  669 fps=221 q=2.0 size=    5237kB time=00:00:51.28 bitrate= 836.6kbits/s
frame=  777 fps=220 q=2.0 size=    6105kB time=00:00:58.75 bitrate= 851.2kbits/s
frame=  893 fps=222 q=2.0 size=    6897kB time=00:01:06.72 bitrate= 846.8kbits/s
frame= 1006 fps=222 q=2.0 size=    7701kB time=00:01:14.60 bitrate= 845.6kbits/s
frame= 1121 fps=223 q=2.0 size=    8539kB time=00:01:22.45 bitrate= 848.3kbits/s
frame= 1235 fps=223 q=2.0 size=    9316kB time=00:01:30.39 bitrate= 844.3kbits/s
frame= 1344 fps=223 q=2.0 size=   10135kB time=00:01:37.98 bitrate= 847.3kbits/s
Buffer queue overflow, dropping.
[Parsed_overlay_4 @ 02259ae0] Buffer queue overflow, dropping.
    Last message repeated 33 times
frame= 1437 fps=220 q=2.0 size=   10800kB time=00:01:46.67 bitrate= 829.3kbits/s
frame= 1540 fps=219 q=2.0 size=   11577kB time=00:01:54.16 bitrate= 830.7kbits/s
frame= 1651 fps=219 q=2.0 size=   12330kB time=00:02:01.64 bitrate= 830.3kbits/s
frame= 1756 fps=218 q=2.0 size=   13141kB time=00:02:09.06 bitrate= 834.1kbits/s
frame= 1859 fps=217 q=2.0 size=   13879kB time=00:02:16.28 bitrate= 834.3kbits/s
frame= 1962 fps=217 q=2.0 size=   14703kB time=00:02:23.69 bitrate= 838.2kbits/s
frame= 2070 fps=217 q=2.0 size=   15448kB time=00:02:30.98 bitrate= 838.2kbits/s
frame= 2176 fps=216 q=2.0 size=   16241kB time=00:02:38.46 bitrate= 839.6kbits/s
Buffer queue overflow, dropping.
[Parsed_overlay_4 @ 02259ae0] Buffer queue overflow, dropping.
    Last message repeated 20 times
frame= 2275 fps=215 q=2.0 size=   16990kB time=00:02:46.78 bitrate= 834.5kbits/s
frame= 2389 fps=216 q=1.6 size=   17784kB time=00:02:54.44 bitrate= 835.1kbits/s
frame= 2493 fps=216 q=2.0 size=   18555kB time=00:03:01.73 bitrate= 836.4kbits/s
frame= 2534 fps=216 q=2.0 Lsize=   18945kB time=00:03:04.81 bitrate= 839.7kbits/s
video:18227kB audio:588kB subtitle:0 global headers:0kB muxing overhead 0.691204%

    I think -map is the option that handles the audio of both videos; see this link.

    Superimposing two videos onto a static image ?
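
    One way to keep both audio tracks (a sketch, untested against these exact files; it assumes an ffmpeg build new enough for -filter_complex and amix, newer than the 2012 build in the log above) is to pass both FLVs as regular inputs, overlay the video streams, mix the audio streams, and select the results with -map:

```shell
# Overlay the two videos side by side and mix their audio streams,
# then map the labelled video [v] and audio [a] into the output.
ffmpeg -i input1.flv -i input2.flv \
  -filter_complex "[0:v]scale=359:320,pad=2*iw+6:ih[left]; \
                   [1:v]scale=359:320[right]; \
                   [left][right]overlay=365:0[v]; \
                   [0:a][1:a]amix=inputs=2:duration=longest[a]" \
  -map "[v]" -map "[a]" -b:v 3600k -y output.flv
```

    The movie= source filter in the original command only pulls the video stream from input2.flv, which would explain why its audio never reaches the output.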

  • Screeching white sound coming while playing audio as a raw stream

    27 April 2020, by Sri Nithya Sharabheshwarananda

    I. Background

    1. I am trying to make an application that helps match subtitles to the audio waveform very accurately, at the waveform level, at the word level, or even at the character level.
    2. The audio is expected to be Sanskrit chants (Yoga, rituals, etc.), which are extremely long compound words [example: aṅganyā-sokta-mātaro-bījam is traditionally one word, broken only to assist reading].
    3. The input transcripts/subtitles might be roughly in sync at the sentence/verse level, but surely would not be in sync at the word level.
    4. The application should be able to figure out points of silence in the audio waveform, so that it can guess the start and end point of each word (or even of each letter/consonant/vowel in a word), such that the chanted audio and the visual subtitle match perfectly at the word level (or even at the letter/consonant/vowel level); the UI then highlights or animates the exact word (or even letter) in the subtitle line being chanted at that moment, and also shows that word (or letter/consonant/vowel) in a bigger font. This app's purpose is to assist learning Sanskrit chanting.
    5. It is not expected to be a 100% automated process, nor 100% manual, but a mix where the application should assist the human as much as possible.

    II. Following is the first code I wrote for this purpose, wherein

    1. First I open an mp3 (or any audio format) file,
    2. Seek to some arbitrary point in the timeline of the audio file (as of now, playing from zero offset),
    3. Get the audio data in raw format for two purposes: (1) playing it and (2) drawing the waveform,
    4. Play the raw audio data using the standard Java audio libraries.
    
    III. The problem I am facing is that, between every cycle, there is a screeching sound.

    • Probably I need to close the line between cycles? Sounds simple; I can try.
    • But I am also wondering whether this overall approach itself is correct. Any tip, guide, suggestion, or link would be really helpful.
    • Also, I just hard-coded the sample rate etc. (44100 Hz, etc.). Are these good to set as default presets, or should they depend on the input format?
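
    Regarding the hard-coded sample rate: rather than relying on a 44100 Hz preset, the input's actual parameters can be queried first, for example with ffprobe (a sketch; the file name is a placeholder). The Jaffree FFprobe call already used in basicCheck in the code below should expose the same per-stream values.

```shell
# Print the sample rate and channel count of the first audio stream.
ffprobe -v error -select_streams a:0 \
  -show_entries stream=sample_rate,channels \
  -of default=noprint_wrappers=1 input.mp3
```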
    
    IV. Here is the code:

import com.github.kokorin.jaffree.StreamType;
import com.github.kokorin.jaffree.ffmpeg.FFmpeg;
import com.github.kokorin.jaffree.ffmpeg.FFmpegProgress;
import com.github.kokorin.jaffree.ffmpeg.FFmpegResult;
import com.github.kokorin.jaffree.ffmpeg.NullOutput;
import com.github.kokorin.jaffree.ffmpeg.PipeOutput;
import com.github.kokorin.jaffree.ffmpeg.ProgressListener;
import com.github.kokorin.jaffree.ffprobe.Stream;
import com.github.kokorin.jaffree.ffmpeg.UrlInput;
import com.github.kokorin.jaffree.ffprobe.FFprobe;
import com.github.kokorin.jaffree.ffprobe.FFprobeResult;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;


public class FFMpegToRaw {
    Path BIN = Paths.get("f:\\utilities\\ffmpeg-20190413-0ad0533-win64-static\\bin");
    String VIDEO_MP4 = "f:\\org\\TEMPLE\\DeviMahatmyamRecitationAudio\\03_01_Devi Kavacham.mp3";
    FFprobe ffprobe;
    FFmpeg ffmpeg;

    public void basicCheck() throws Exception {
        if (BIN != null) {
            ffprobe = FFprobe.atPath(BIN);
        } else {
            ffprobe = FFprobe.atPath();
        }
        FFprobeResult result = ffprobe
                .setShowStreams(true)
                .setInput(VIDEO_MP4)
                .execute();

        for (Stream stream : result.getStreams()) {
            System.out.println("Stream " + stream.getIndex()
                    + " type " + stream.getCodecType()
                    + " duration " + stream.getDuration(TimeUnit.SECONDS));
        }    
        if (BIN != null) {
            ffmpeg = FFmpeg.atPath(BIN);
        } else {
            ffmpeg = FFmpeg.atPath();
        }

        //Sometimes ffprobe can't show the exact duration; use ffmpeg transcoding to NULL output to get it
        final AtomicLong durationMillis = new AtomicLong();
        FFmpegResult fFmpegResult = ffmpeg
                .addInput(
                        UrlInput.fromUrl(VIDEO_MP4)
                )
                .addOutput(new NullOutput())
                .setProgressListener(new ProgressListener() {
                    @Override
                    public void onProgress(FFmpegProgress progress) {
                        durationMillis.set(progress.getTimeMillis());
                    }
                })
                .execute();
        System.out.println("audio size - "+fFmpegResult.getAudioSize());
        System.out.println("Exact duration: " + durationMillis.get() + " milliseconds");
    }

    public void toRawAndPlay() throws Exception {
        ProgressListener listener = new ProgressListener() {
            @Override
            public void onProgress(FFmpegProgress progress) {
                System.out.println(progress.getFrame());
            }
        };

        // code derived from : https://stackoverflow.com/questions/32873596/play-raw-pcm-audio-received-in-udp-packets

        int sampleRate = 44100;//24000;//Hz
        int sampleSize = 16;//Bits
        int channels   = 1;
        boolean signed = true;
        boolean bigEnd = false;
        String format  = "s16be"; //"f32le"

        //https://trac.ffmpeg.org/wiki/audio types
        final AudioFormat af = new AudioFormat(sampleRate, sampleSize, channels, signed, bigEnd);
        final DataLine.Info info = new DataLine.Info(SourceDataLine.class, af);
        final SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);

        line.open(af, 4096); // format , buffer size
        line.start();

        OutputStream destination = new OutputStream() {
            @Override public void write(int b) throws IOException {
                throw new UnsupportedOperationException("Nobody uses this.");
            }
            @Override public void write(byte[] b, int off, int len) throws IOException {
                String o = new String(b);
                boolean showString = false;
                System.out.println("New output ("+ len
                        + ", off="+off + ") -> "+(showString?o:"")); 
                // output wave form repeatedly

                if(len%2!=0) {
                    len -= 1;
                    System.out.println("");
                }
                line.write(b, off, len);
                System.out.println("done round");
            }
        };

        // src : http://blog.wudilabs.org/entry/c3d357ed/?lang=en-US
        FFmpegResult result = FFmpeg.atPath(BIN).
            addInput(UrlInput.fromPath(Paths.get(VIDEO_MP4))).
            addOutput(PipeOutput.pumpTo(destination).
                disableStream(StreamType.VIDEO). //.addArgument("-vn")
                setFrameRate(sampleRate).            //.addArguments("-ar", sampleRate)
                addArguments("-ac", "1").
                setFormat(format)              //.addArguments("-f", format)
            ).
            setProgressListener(listener).
            execute();

        // shut down audio
        line.drain();
        line.stop();
        line.close();

        System.out.println("result = "+result.toString());
    }

    public static void main(String[] args) throws Exception {
        FFMpegToRaw raw = new FFMpegToRaw();
        raw.basicCheck();
        raw.toRawAndPlay();
    }
}
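
    One detail worth checking in the code above (an observation, not a confirmed fix): the ffmpeg output format is set to "s16be" (big-endian), while the AudioFormat is created with bigEnd = false (little-endian). Such a byte-order mismatch on 16-bit samples typically produces loud noise. A sketch of a matching command line, with placeholder file names:

```shell
# Decode to signed 16-bit little-endian PCM, mono, 44100 Hz, so the raw
# bytes match a Java line opened as AudioFormat(44100, 16, 1, true, false).
ffmpeg -i input.mp3 -vn -ac 1 -ar 44100 -f s16le audio.raw
```

    In the Java code, that corresponds to either passing "s16le" as the format string, or setting bigEnd = true to match "s16be".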

    Thank You

  • Adding background audio in FFMPEG, but quieting it if there is sound in another channel

    9 September 2021, by Connor Bell

    I'm appending several videos together with FFMPEG. Some of these videos have accompanying audio, and some do not.

    I'd like to add music in the background, and have found out how to do so from here. However, I'd like the background music to be (let's say 80%) quieter if there is already audio in that video. Note that all videos have a null audio track, so just checking for the existence of an audio track isn't sufficient.

    My current process is:

    • Take the source videos, add a null audio source, and upscale (the null audio source is required for ffmpeg-concat to work due to a bug, I think)
    • Combine the videos using ffmpeg-concat

    Preferably, adding the background music should be the third step, as splitting the background music prior to combining the videos sounds more complex, but I may be wrong.
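
    For that third step, one candidate (a sketch, untested; file names and parameter values are placeholders) is ffmpeg's sidechaincompress filter, which lowers the music only while the concatenated video's own audio is above a threshold, combined with amix to merge the ducked music back in:

```shell
# [0:a] is the concatenated video's audio, [1:a] the background music.
# The music is compressed (ducked) whenever [0:a] carries real sound,
# then mixed with the original audio track.
ffmpeg -i combined.mp4 -i music.mp3 -filter_complex \
  "[0:a]asplit=2[sc][mix]; \
   [1:a][sc]sidechaincompress=threshold=0.05:ratio=8:attack=50:release=500[bg]; \
   [mix][bg]amix=inputs=2:duration=first[a]" \
  -map 0:v -map "[a]" -c:v copy output.mp4
```

    If a constant reduction is acceptable instead of true ducking, replacing the sidechaincompress line with a plain [1:a]volume=0.2[bg] gives the simpler "80% quieter everywhere" behaviour.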