Advanced search

Media (0)

Keyword: - Tags -/signalement

No media matching your criteria is available on the site.

Other articles (38)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Contribute to translation

    13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into other languages, allowing MediaSPIP to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects / individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (4084)

  • Extracting audio from video using Xuggler

    13 February 2014, by Sudh

    I am trying to extract audio (mp3) from a video file (flv), but I keep getting exceptions:

    05:02:10.326 [AWT-EventQueue-0] ERROR org.ffmpeg - [aac @ 000000000043B3F0] channel element 0.0 is not allocated
    java.lang.IllegalArgumentException: stream[0] is not video

    I tried with this:

    public void runExample(int a) {
       String sourceUrl="F:\\Software\\library\\test1.mp4";
       String destUrl="F:\\Software\\library\\test1.flv";
       IMediaReader reader = null;
       IMediaWriter writer = null;
       try {
           reader = ToolFactory.makeReader(sourceUrl);
           writer = ToolFactory.makeWriter(destUrl, reader);
           reader.addListener(writer);
           int sampleRate = 44100;
           int channels = 1;
           //writer.addAudioStream(0, 0, ICodec.ID.CODEC_ID_MP3, channels, sampleRate);

           while (reader.readPacket() == null) ;
           //Should IMediaReader automatically call close(), only if ERROR_EOF (End of File) is returned from readPacket().
           reader.setCloseOnEofOnly(false);
           //If false the media data will be left in the order in which it is presented to the IMediaWriter.
           //If true IMediaWriter will buffer media data in time stamp order, and only write out data when it has at least one same time or later packet from all streams.
           writer.setForceInterleave(false);
           System.out.println("closed...");
       } catch (Exception ex) {
           ex.printStackTrace();
       }
    }

    Also, when I try this:

    public String seperateAudioStream(String pathToAudioFile)
       { String sourceUrl="F:\\Software\\library\\test1.mp4";
           String destUrl="F:\\Software\\library\\test1.mp3";

           IMediaReader reader = ToolFactory.makeReader(sourceUrl);
           reader.open();
           IMediaWriter writer = ToolFactory.makeWriter(destUrl,reader);
           reader.addListener(writer);
           int sampleRate = 44100;
           int channels = 1;
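           // likely the source of "java.lang.IllegalArgumentException: stream[0] is not video":
           // an audio codec ID is passed to addVideoStream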
           writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_MP3, channels, sampleRate);

           while (reader.readPacket() == null);
           IContainer container = IContainer.make();
           int result = container.open(sourceUrl, IContainer.Type.READ, null);
            // check if the operation was successful
             if (result<0)
                 throw new RuntimeException("Failed to open media file");

             int numStreams = container.getNumStreams();

             int audioStreamId = -1;

              IContainer outContainer = IContainer.make();
              outContainer.open(destUrl, IContainer.Type.WRITE, IContainerFormat.make());



              for (int i = 0; i < numStreams; i++) {
                  // find the audio stream and open its decoder
                  IStream stream = container.getStream(i);
                  IStreamCoder coder = stream.getStreamCoder();

                  if (coder.getCodecType() == ICodec.Type.CODEC_TYPE_AUDIO) {
                      audioStreamId = i;
                      coder.open(null, null);

                      if (outContainer.writeHeader() >= 0) {
                          IPacket packet = IPacket.make();
                          while (container.readNextPacket(packet) >= 0) {

                              if(packet.getStreamIndex() == audioStreamId)
                              {
                                  if(coder.isOpen()){

                                      System.out.println("Writing audio ...");
                                      outContainer.writePacket(packet);

                                  } else {throw new RuntimeException("Could not open Coder"); }
                              }
                          }
                      }else {throw new RuntimeException("Header not Written for writer container.");}
                  }

                  coder.close();
              }
              outContainer.writeTrailer();
              outContainer.close();
           return null;

      }

    I get the error ERROR org.ffmpeg channel element 0.0 is not allocated multiple times.

    The documentation is unclear, to say the least. Xuggler's website looks broken and none of the videos given in the tutorial play; even on Stack Overflow, most of the questions related to this are unanswered.

  • Real time compression/encoding using ffmpeg in objective c

    20 February 2014, by halfwaythru

    I have a small application written in Objective-C that looks for the video devices on the machine and allows the user to record video. I need to be able to compress this video stream in real time. I do not want to save the whole video; I want to compress it as much as possible and only write out the compressed version.

    I also don't want to use AVFoundation's built-in compression methods and need to use a third-party library like ffmpeg.

    So far, I have been able to record the video and get individual frames using 'AVCaptureVideoDataOutputSampleBufferDelegate' in this method:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
      didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
      fromConnection:(AVCaptureConnection *)connection

    So I have a stream of images, basically, and I want to throw them into ffmpeg (which is all set up on my machine). Do I need to call a terminal command to do this? And if I do, how do I use the image stack as the input to the ffmpeg command instead of a video file? Also, how do I combine all the little videos in the end?
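
    For reference, ffmpeg can read raw frames from a pipe rather than from files on disk, so one option is to launch it as a subprocess and write each frame to its standard input. A sketch of such a command (the pixel format, frame size, and rate here are assumptions that must match the capture output):

    ffmpeg -f rawvideo -pix_fmt bgra -video_size 1280x720 -r 30 -i - -c:v libx264 -pix_fmt yuv420p output.mp4

    Feeding one long-running ffmpeg process this way also avoids having to combine many small videos at the end.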

    Any help is appreciated. Thanks!

  • Send image and audio data to FFmpeg via named pipes

    5 May, by Nicke Manarin

    I'm able to send frames one by one to FFmpeg via a named pipe to create a video out of them, but if I try sending audio through a second named pipe, FFmpeg accepts only 1 frame from the frame pipe and starts reading from the audio pipe soon after.

    ffmpeg.exe -loglevel debug -hwaccel auto 
-f:v rawvideo -r 25 -pix_fmt bgra -video_size 782x601 -i \\.\pipe\video_to_ffmpeg 
-f:a s16le -ac 2 -ar 48000 -i \\.\pipe\audio_to_ffmpeg 
-c:v libx264 -preset fast -pix_fmt yuv420p 
-vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -crf 23 -f:v mp4 -vsync vfr 
-c:a aac -b:a 128k -ar 48000 -ac 2 
-y "C:\Users\user\Desktop\video.mp4"

    I start both pipes like so:

    _imagePipeServer = new NamedPipeServerStream(ImagePipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
_imagePipeStreamWriter = new StreamWriter(_imagePipeServer);
_imagePipeServer.BeginWaitForConnection(null, null);

_audioPipeServer = new NamedPipeServerStream(AudioPipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
_audioPipeStreamWriter = new StreamWriter(_audioPipeServer);
_audioPipeServer.BeginWaitForConnection(null, null);

    And send the data to the pipes using these methods:

    public void EncodeFrame(byte[] data)
{
    if (_imagePipeServer?.IsConnected != true)
        throw new FFmpegException("Pipe not connected", Arguments, Output);

    _imagePipeStreamWriter?.BaseStream.Write(data, 0, data.Length);
}
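
    As a sanity check, each raw BGRA frame written this way is width * height * 4 bytes, with the sizes taken from the command line above:

int width = 782, height = 601;
int bytesPerFrame = width * height * 4; // BGRA = 4 bytes per pixel
Console.WriteLine(bytesPerFrame);       // 1879928, matching "bytes read:1879928" in the debug log below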

    public void EncodeAudio(ISampleProvider provider, long length)
{
    if (_audioPipeServer?.IsConnected != true)
        throw new FFmpegException("Pipe not connected", Arguments, Output);

    var buffer = new byte[provider.WaveFormat.AverageBytesPerSecond * length / TimeSpan.TicksPerSecond];
    var bytesRead = provider.ToWaveProvider().Read(buffer, 0, buffer.Length);

    if (bytesRead < 1)
        return;

    _audioPipeStreamWriter?.BaseStream.Write(buffer, 0, bytesRead);
    _audioPipeStreamWriter?.BaseStream.Flush();
}
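
    Likewise, for s16le audio (2 bytes per sample) with 2 channels at 48 kHz, AverageBytesPerSecond is 48000 * 2 * 2 = 192,000, so one 25 fps frame's worth of audio comes to 7,680 bytes:

int bytesPerSecond = 48000 * 2 * 2;           // sample rate * channels * bytes per sample
int bytesPerVideoFrame = bytesPerSecond / 25; // 7680 bytes per 40 ms video frame
// The "bytes read:15360" probed from the audio pipe in the log below is exactly two such chunks.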

    Not sending the audio (and thus not creating the audio pipe) works, with FFmpeg taking one frame at a time and creating the video normally.

    But if I try sending the audio via a secondary pipe, I can only send one frame. This is the output when that happens (by the way, this is FFmpeg v7.1):

    Splitting the commandline.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Reading option '-hwaccel' ... matched as option 'hwaccel' (use HW accelerated decoding) with argument 'auto'.
Reading option '-f:v' ... matched as option 'f' (force container format (auto-detected otherwise)) with argument 'rawvideo'.
Reading option '-r' ... matched as option 'r' (override input framerate/convert to given output framerate (Hz value, fraction or abbreviation)) with argument '25'.
Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'bgra'.
Reading option '-video_size' ... matched as AVOption 'video_size' with argument '782x601'.
Reading option '-i' ... matched as input url with argument '\\.\pipe\video_to_ffmpeg'.
Reading option '-f:a' ... matched as option 'f' (force container format (auto-detected otherwise)) with argument 's16le'.
Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '2'.
Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '48000'.
Reading option '-i' ... matched as input url with argument '\\.\pipe\audio_to_ffmpeg'.
Reading option '-c:v' ... matched as option 'c' (select encoder/decoder ('copy' to copy stream without reencoding)) with argument 'libx264'.
Reading option '-preset' ... matched as AVOption 'preset' with argument 'fast'.
Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'yuv420p'.
Reading option '-vf' ... matched as option 'vf' (alias for -filter:v (apply filters to video streams)) with argument 'scale=trunc(iw/2)*2:trunc(ih/2)*2'.
Reading option '-crf' ... matched as AVOption 'crf' with argument '23'.
Reading option '-f:v' ... matched as option 'f' (force container format (auto-detected otherwise)) with argument 'mp4'.
Reading option '-fps_mode' ... matched as option 'fps_mode' (set framerate mode for matching video streams; overrides vsync) with argument 'vfr'.
Reading option '-c:a' ... matched as option 'c' (select encoder/decoder ('copy' to copy stream without reencoding)) with argument 'aac'.
Reading option '-b:a' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '128k'.
Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '48000'.
Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '2'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option 'C:\Users\user\Desktop\video.mp4' ... matched as output url.
Finished splitting the commandline.

Parsing a group of options: global.
Applying option loglevel (set logging level) with argument debug.
Applying option y (overwrite output files) with argument 1.
Successfully parsed a group of options.

Parsing a group of options: input url \\.\pipe\video_to_ffmpeg.
Applying option hwaccel (use HW accelerated decoding) with argument auto.
Applying option f:v (force container format (auto-detected otherwise)) with argument rawvideo.
Applying option r (override input framerate/convert to given output framerate (Hz value, fraction or abbreviation)) with argument 25.
Applying option pix_fmt (set pixel format) with argument bgra.
Successfully parsed a group of options.

Opening an input file: \\.\pipe\video_to_ffmpeg.
[rawvideo @ 000001c302ee08c0] Opening '\\.\pipe\video_to_ffmpeg' for reading
[file @ 000001c302ee1000] Setting default whitelist 'file,crypto,data'
[rawvideo @ 000001c302ee08c0] Before avformat_find_stream_info() pos: 0 bytes read:65536 seeks:0 nb_streams:1
[rawvideo @ 000001c302ee08c0] All info found
[rawvideo @ 000001c302ee08c0] After avformat_find_stream_info() pos: 1879928 bytes read:1879928 seeks:0 frames:1
Input #0, rawvideo, from '\\.\pipe\video_to_ffmpeg':
  Duration: N/A, start: 0.000000, bitrate: 375985 kb/s
  Stream #0:0, 1, 1/25: Video: rawvideo, 1 reference frame (BGRA / 0x41524742), bgra, 782x601, 0/1, 375985 kb/s, 25 tbr, 25 tbn
Successfully opened the file.

Parsing a group of options: input url \\.\pipe\audio_to_ffmpeg.
Applying option f:a (force container format (auto-detected otherwise)) with argument s16le.
Applying option ac (set number of audio channels) with argument 2.
Applying option ar (set audio sampling rate (in Hz)) with argument 48000.
Successfully parsed a group of options.

Opening an input file: \\.\pipe\audio_to_ffmpeg.
[s16le @ 000001c302ef5380] Opening '\\.\pipe\audio_to_ffmpeg' for reading
[file @ 000001c302ef58c0] Setting default whitelist 'file,crypto,data'

    If I instead try sending 1 frame and then some bytes of audio (arbitrary length based on the fps), the difference is that I get this extra line at the end:

    [s16le @ 0000025948c96d00] Before avformat_find_stream_info() pos: 0 bytes read:15360 seeks:0 nb_streams:1

    Extra calls to EncodeFrame() hang forever at the BaseStream.Write(frameBytes, 0, frameBytes.Length) call, suggesting that FFmpeg is no longer reading the data.

    Something is causing FFmpeg to close or stop reading the first pipe and only accept data from the second one.

    Perhaps the command is missing something?

    🏆 Working solution

    I started using two BlockingCollection objects, with the consumers running in separate tasks.

    Start the process, setting up the pipes:

    private Process? _process;
private NamedPipeServerStream? _imagePipeServer;
private NamedPipeServerStream? _audioPipeServer;
private StreamWriter? _imagePipeStreamWriter;
private StreamWriter? _audioPipeStreamWriter;
private readonly BlockingCollection<byte[]> _videoCollection = new();
private readonly BlockingCollection<byte[]> _audioCollection = new();

private const string ImagePipeName = "video_to_ffmpeg";
private const string AudioPipeName = "audio_to_ffmpeg";
private const string PipeStructure = @"\\.\pipe\"; //This part is only sent to FFmpeg, not to the .NET pipe creation.

public void StartEncoding(string arguments)
{
    _process = new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = "path to ffmpeg",
            Arguments = arguments.Replace("{image}", PipeStructure + ImagePipeName).Replace("{audio}", PipeStructure + AudioPipeName),
            RedirectStandardInput = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            UseShellExecute = false,
            CreateNoWindow = true
        }
    };

    StartFramePipeConnection();
    StartAudioPipeConnection();

    _process.Start();
    _process.BeginErrorReadLine();
    _process.BeginOutputReadLine();
}

private void StartFramePipeConnection()
{
    if (_imagePipeServer != null)
    {
        if (_imagePipeServer.IsConnected)
            _imagePipeServer.Disconnect();

        _imagePipeServer.Dispose();
    }

    _imagePipeServer = new NamedPipeServerStream(ImagePipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
    _imagePipeStreamWriter = new StreamWriter(_imagePipeServer);
    _imagePipeServer.BeginWaitForConnection(VideoPipe_Connected, null);
}

private void StartAudioPipeConnection()
{
    if (_audioPipeServer != null)
    {
        if (_audioPipeServer.IsConnected)
            _audioPipeServer.Disconnect();

        _audioPipeServer.Dispose();
    }

    _audioPipeServer = new NamedPipeServerStream(AudioPipeName, PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous);
    _audioPipeStreamWriter = new StreamWriter(_audioPipeServer);
    _audioPipeServer.BeginWaitForConnection(AudioPipe_Connected, null);
}

    Start sending the data as soon as the pipe gets connected. Once the BlockingCollection is signaled that no more data will be added, the consumer leaves the foreach block and then waits for the pipe to drain its data.

    private void VideoPipe_Connected(IAsyncResult ar)
{
    Task.Run(() =>
    {
        try
        {
            foreach (var frameBytes in _videoCollection.GetConsumingEnumerable())
            {                    
                _imagePipeStreamWriter?.BaseStream.Write(frameBytes, 0, frameBytes.Length);
            }

            _imagePipeServer?.WaitForPipeDrain();
            _imagePipeStreamWriter?.Close();
        }
        catch (Exception e)
        {
            //Logging
            throw;
        }
    });
}

private void AudioPipe_Connected(IAsyncResult ar)
{
    Task.Run(() =>
    {
        try
        {
            foreach (var audioChunk in _audioCollection.GetConsumingEnumerable())
            {
                _audioPipeStreamWriter?.BaseStream.Write(audioChunk, 0, audioChunk.Length);
            }

            _audioPipeServer?.WaitForPipeDrain();
            _audioPipeStreamWriter?.Close();
        }
        catch (Exception e)
        {
            //Logging
            throw;
        }
    });
}

    You can start sending image and audio data as soon as the BlockingCollections are initialized; there is no need to wait for the pipes to connect.

    public void EncodeImage(byte[] data)
{
    _videoCollection.Add(data);
}

public void EncodeAudio(ISampleProvider provider, long length)
{
    var sampleCount = (int)(provider.WaveFormat.SampleRate * ((double)length / TimeSpan.TicksPerSecond) * provider.WaveFormat.Channels);
    var floatBuffer = new float[sampleCount];

    var samplesRead = provider.Read(floatBuffer, 0, sampleCount);

    if (samplesRead < 1)
        return;

    var byteBuffer = new byte[samplesRead * 4]; //4 bytes per float, f32le.
    Buffer.BlockCopy(floatBuffer, 0, byteBuffer, 0, byteBuffer.Length);

    _audioCollection.Add(byteBuffer);
}

    Once you have finished producing data, make sure to signal the BlockingCollections:

    public void FinishEncoding()
{
    //Signal the end of video/audio producer.
    _videoCollection.CompleteAdding();
    _audioCollection.CompleteAdding();

    //Waits up to 20 seconds for the encoding to finish.
    _process?.WaitForExit(20_000);
}

    The FFmpeg arguments were changed slightly: the audio is now sent as f32le, and -probesize 32 was added to both inputs so that FFmpeg finishes probing each pipe almost immediately instead of stalling while waiting for enough data to analyze the streams:

    -loglevel trace -hwaccel auto 
-f:v rawvideo -probesize 32 -r 25 -pix_fmt bgra -video_size 1109x627 -i {image} 
-f:a f32le -ac 2 -ar 48000 -probesize 32 -i {audio} 
-c:v libx264 -preset fast -pix_fmt yuv420p 
-vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -crf 23 -f:v mp4 -fps_mode vfr 
-c:a aac -b:a 128k -ar 48000 -ac 2 
-y "C:\Users\user\Desktop\Video.mp4"