
Media (1)

Keyword: - Tags -/géodiversité

Other articles (39)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or later. If needed, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player was created specifically for MediaSPIP: its appearance can be fully customized to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (6514)

  • Pipe FFmpeg to S3

    23 November 2019, by Garret Harp

    So I am using an AWS Lambda function that downloads a few files and concatenates them. The actual ffmpeg command is quite fast, maybe 6-10 s for a 240 MB output file. The most time-consuming part of my function is saving the resulting video to S3, and I really want to speed this up as much as possible.

    My current ffmpeg command (fluent-ffmpeg on Node):

    (f = a text file listing the multiple .ts files that have been downloaded locally)

    (temp_mp4 = a temporary mp4 file that gets saved on the system and then uploaded to S3)

    ffmpeg()
       .addInput(f)
       .inputFormat('concat')
       .addInputOptions([ '-safe 0', '-protocol_whitelist file' ])
       .addOutputOptions([ '-y', '-codec copy' ])
       .output(temp_mp4)
       .on('error', (err) => {
           console.error('ffmpeg err', err)
       })
       .on('end', () => {
           resolve()
       })
       .run()

    My upload-to-S3 function is then:

    function saveVideo (video, filename) {
       const params = {
           Bucket: '',
           Key: filename,
           Body: fs.createReadStream(video),
           ACL: 'public-read'
       }

       return s3.putObject(params).promise()
    }

    Depending on the exact file size, this upload can take 20-30 s, while the ffmpeg command takes only 6-10 s.

    My best thought was to somehow pipe the ffmpeg output directly to S3. Is this possible?
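
    A minimal sketch of one way this could work, written in Python for brevity (the question itself uses Node, where fluent-ffmpeg's pipe() output and s3.upload(), which unlike putObject accepts a stream of unknown length, would play the same roles): let ffmpeg write the MP4 to stdout and stream that straight into a chunked S3 upload, so uploading overlaps with the ffmpeg run. The bucket, key and list-file names are placeholders, and the -movflags option is an assumption needed because the regular MP4 muxer cannot write to a non-seekable pipe.

    # Hypothetical sketch: stream the concat result straight to S3 instead of
    # writing temp_mp4 to disk first. Bucket/key names are placeholders.
    import subprocess

    import boto3

    s3 = boto3.client('s3')

    def concat_to_s3(list_file, bucket, key):
        cmd = [
            'ffmpeg', '-y',
            '-f', 'concat', '-safe', '0', '-protocol_whitelist', 'file,pipe',
            '-i', list_file,
            '-c', 'copy',
            # fragmented MP4, so the muxer can write to a non-seekable pipe
            '-movflags', 'frag_keyframe+empty_moov',
            '-f', 'mp4', 'pipe:1',
        ]
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        # upload_fileobj performs a chunked multipart upload from the pipe,
        # so the upload runs while ffmpeg is still producing output.
        s3.upload_fileobj(proc.stdout, bucket, key,
                          ExtraArgs={'ACL': 'public-read'})
        proc.wait()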

  • Create a pipe of input and output for wav to mp3 encoding

    28 October 2020, by loretoparisi

    I have to pipe a WAV data stream to ffmpeg in Python. I can easily create an output pipe from an input mp3 file like this:

    process = (
        ffmpeg
        .input(path)
        .output('pipe:', **output_kwargs)
        .run_async(pipe_stdout=True, pipe_stderr=True))
    buffer, _ = process.communicate()
    # because we need (n_channels, samples)
    waveform = np.frombuffer(buffer, dtype='<f4')
    Here waveform will contain the decoded audio samples.

    Now I want to pipe the same data, but from an input stream; for some reason it does not work as expected:

    # data shape is like (9161728, 2) for two-channel audio data
    input_kwargs = {'ar': sample_rate, 'ac': data.shape[1]}
    output_kwargs = {'ar': sample_rate, 'strict': '-2'}
    n_channels = 2
    process = (
        ffmpeg
        .input('pipe:', format='f32le', **input_kwargs)
        .output('pipe:', **output_kwargs)
        .run_async(pipe_stdin=True, quiet=True))
    buffer, err = process.communicate(input=data.astype('<f4').tobytes())
    The output buffer is empty after process.communicate returns, while err contains:

    Unable to find a suitable output format for 'pipe:'
    pipe:: Invalid argument
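
    For reference, a minimal sketch of the piped-in/piped-out variant, assuming ffmpeg-python and float32 interleaved samples; the explicit format='mp3' on the output is an assumption, since 'pipe:' has no file extension from which ffmpeg could guess a container, which is what the error above complains about.

    # Hypothetical sketch (not the asker's exact code): encode float32 PCM
    # from a NumPy array to MP3 entirely through pipes with ffmpeg-python.
    import ffmpeg
    import numpy as np

    def encode_to_mp3(data, sample_rate):
        n_channels = data.shape[1]
        process = (
            ffmpeg
            .input('pipe:', format='f32le', ar=sample_rate, ac=n_channels)
            # 'pipe:' has no extension, so the output container has to be
            # named explicitly with format=... (ffmpeg's -f option).
            .output('pipe:', format='mp3', ar=sample_rate)
            .run_async(pipe_stdin=True, pipe_stdout=True, pipe_stderr=True))
        buffer, _ = process.communicate(input=data.astype('<f4').tobytes())
        return buffer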

  • Extract individual frames from video and pipe them to StandardOutput in FFmpeg

    13 November 2019, by Nicke Manarin

    I’m trying to extract frames from a video using FFmpeg, but instead of letting FFmpeg write the files to disk, I’m trying to read the frames directly from StandardOutput.

    I’m not sure whether this is feasible. I expect to receive each frame individually as it is decoded, reading and waiting until all frames have been extracted.

    With the current code, I think I’m getting all frames at once.


    Command

    ffmpeg -i "C:\video.mp4" -r 30 -ss 00:00:10.000 -to 00:01:20.000 -hide_banner -c:v png -f image2pipe -

    Code

    var start = TimeSpan.FromMilliseconds(SelectionSlider.LowerValue);
    var end = TimeSpan.FromMilliseconds(SelectionSlider.UpperValue);

    var info = new ProcessStartInfo(UserSettings.All.FfmpegLocation)
    {
       Arguments = $" -i \"{VideoPath}\" -r {fps} -ss {start:hh\\:mm\\:ss\\.fff} " +
           $"-to {end:hh\\:mm\\:ss\\.fff} -hide_banner -c:v png -f image2pipe -",
       CreateNoWindow = true,
       ErrorDialog = false,
       UseShellExecute = false,
       RedirectStandardError = true,
       RedirectStandardOutput = true
    };

    var process = new Process();
    process.StartInfo = info;
    process.Start();

    while (!process.StandardOutput.EndOfStream)
    {
       if (_cancelled)
       {
           process.Kill();
           return;
       }

       //This returns me the entire byte array, of all frames.
       var bytes = default(byte[]);
       using (var memstream = new MemoryStream())
       {
           process.StandardOutput.BaseStream.CopyTo(memstream);
           bytes = memstream.ToArray();
       }
    }

    I also tried using process.BeginOutputReadLine() and waiting for each frame in OutputDataReceived, but it returns parts of each frame, like the first 10 bytes, then another 50 bytes; it’s erratic.

    Is there any way to get the frames separately via the output stream?
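
    A minimal sketch of one way to split individual frames out of the image2pipe stream, written in Python for brevity (the same chunk-walking logic applies to reading process.StandardOutput.BaseStream in C#); the ffmpeg arguments mirror the command above, and the video path and output file names are placeholders.

    # Hypothetical sketch: read one PNG at a time from ffmpeg's stdout by
    # walking the PNG structure (8-byte signature, then length/type/data/CRC
    # chunks, ending with IEND) instead of buffering the whole stream.
    import struct
    import subprocess

    PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

    def read_exact(stream, n):
        """Read exactly n bytes, or return b'' if the stream ends first."""
        buf = b''
        while len(buf) < n:
            chunk = stream.read(n - len(buf))
            if not chunk:
                return b''
            buf += chunk
        return buf

    def iter_png_frames(stream):
        """Yield each PNG image found in a concatenated image2pipe stream."""
        while True:
            signature = read_exact(stream, 8)
            if signature != PNG_SIGNATURE:
                return  # end of stream (or unexpected data)
            frame = bytearray(signature)
            while True:
                header = read_exact(stream, 8)  # 4-byte length + 4-byte type
                if len(header) < 8:
                    return
                length, chunk_type = struct.unpack('>I4s', header)
                frame += header + read_exact(stream, length + 4)  # data + CRC
                if chunk_type == b'IEND':
                    break
            yield bytes(frame)

    proc = subprocess.Popen(
        ['ffmpeg', '-i', 'video.mp4', '-r', '30',
         '-ss', '00:00:10.000', '-to', '00:01:20.000',
         '-hide_banner', '-c:v', 'png', '-f', 'image2pipe', '-'],
        stdout=subprocess.PIPE)

    for i, png in enumerate(iter_png_frames(proc.stdout)):
        with open(f'frame_{i:05d}.png', 'wb') as out:
            out.write(png)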