
Medias (1)
-
SPIP - plugins - embed code - Example
2 September 2013
Updated: September 2013
Language: French
Type: Picture
Other articles (94)
-
Customize by adding your logo, banner or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Submit improvements and additional plugins
10 April 2011. If you have developed a new extension that adds one or more features useful to MediaSPIP, let us know, and its integration into the official distribution will be considered.
You can use the development mailing list to let us know, or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, you can also use SPIP's SPIP-zone mailing list to (...)
-
Emballe médias: what is it for?
4 February 2011. This plugin is designed to manage sites that publish documents of all types.
It creates "media", namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a given "media" article;
On other websites (11446)
-
offloading to ffmpeg via named pipes in c#/dotnet core
1 April 2022, by bep
I tried to break this down to the basic elements, so I hope this is clear. I want to take in a network stream; it may be one-way, or it may be a protocol that requires two-way communication, such as RTMP during its handshake.


I want to pass that stream straight through to a spawned FFMPEG process. I then want to capture the output of FFMPEG; in this example I just want to pipe it out to a file. The file is not my end goal, but for simplicity, if I can get that far I think I'll be OK.




I want the code to be as plain as possible and to offload the core processing to FFMPEG. If I ask FFMPEG to output a WebRTC stream, a file, whatever, I just want to capture that. FFMPEG shouldn't be used directly, just indirectly via IncomingRtmpConnectionHandler.

The only other component is OBS, which I am using to create the incoming RTMP stream.


As things stand now, running this results in the following error, which I'm a little unclear on. I don't feel like I'm causing concurrent reads at any point.


System.InvalidOperationException: Concurrent reads are not allowed
 at Medallion.Shell.Throw`1.If(Boolean condition, String message)
 at Medallion.Shell.Streams.Pipe.ReadAsync(Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)
 at Medallion.Shell.Streams.Pipe.PipeOutputStream.ReadAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)
 at System.IO.Stream.ReadAsync(Memory`1 buffer, CancellationToken cancellationToken)
 at System.IO.StreamReader.ReadBufferAsync(CancellationToken cancellationToken)
 at System.IO.StreamReader.ReadLineAsyncInternal()
 at Medallion.Shell.Streams.MergedLinesEnumerable.GetEnumeratorInternal()+MoveNext()
 at System.String.Join(String separator, IEnumerable`1 values)
 at VideoIngest.IncomingRtmpConnectionHandler.OnConnectedAsync(ConnectionContext connection) in Program.cs:line 55
 at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Infrastructure.KestrelConnection`1.ExecuteAsync()
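One detail worth flagging before the code: in the listing below, cmd.StandardOutput is already being drained by PipeToAsync, and every pass through the polling loop calls GetOutputAndErrorLines(), which opens another merged reader over the same stdout/stderr pipes. That double readership is a plausible trigger for the guard in the trace. A minimal, untested sketch of a single-reader arrangement, logging stderr only (ffmpeg writes its diagnostics to stderr anyway):

// Untested sketch: leave StandardOutput exclusively to PipeToAsync and
// give StandardError its own single reader for logging.
var errorLogTask = Task.Run(async () =>
{
    string line;
    while ((line = await cmd.StandardError.ReadLineAsync()) != null)
    {
        logger.LogInformation(line);
    }
});

await outputTask;   // stdout -> file, one reader
await errorLogTask; // stderr -> logger, one reader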



Code:


using System;
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;
using Medallion.Shell;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Connections;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

namespace VideoIngest
{
    public class IncomingRtmpConnectionHandler : ConnectionHandler
    {
        private readonly ILogger<IncomingRtmpConnectionHandler> logger;

        public IncomingRtmpConnectionHandler(ILogger<IncomingRtmpConnectionHandler> logger)
        {
            this.logger = logger;
        }

        public override async Task OnConnectedAsync(ConnectionContext connection)
        {
            logger?.LogInformation("connection started");

            var outputFileName = @"C:\Temp\bunny.mp4";

            var rtmpPassthroughPipeName = Guid.NewGuid().ToString();
            var cmdPath = @"C:\Opt\ffmpeg\bin\ffmpeg.exe";
            var cmdArgs = $"-i pipe:{rtmpPassthroughPipeName} -preset slow -c copy -f mp4 -y pipe:1";

            var cancellationToken = connection.ConnectionClosed;
            var rtmpStream = connection.Transport;

            using (var outputStream = new FileStream(outputFileName, FileMode.Create))
            using (var cmd = Command.Run(cmdPath, options: o => { o.StartInfo(i => i.Arguments = cmdArgs); o.CancellationToken(cancellationToken); }))
            {
                // create a pipe to pass the RTMP data straight to FFMPEG. This code should be dumb to the protocol being used
                var ffmpegPassthroughStream = new NamedPipeServerStream(rtmpPassthroughPipeName, PipeDirection.InOut, 10, PipeTransmissionMode.Byte, System.IO.Pipes.PipeOptions.Asynchronous);

                // take the network stream and pass data to/from the ffmpeg process
                var fromFfmpegTask = ffmpegPassthroughStream.CopyToAsync(rtmpStream.Output.AsStream(), cancellationToken);
                var toFfmpegTask = rtmpStream.Input.AsStream().CopyToAsync(ffmpegPassthroughStream, cancellationToken);

                // take the ffmpeg process output (not stdout) into the target file
                var outputTask = cmd.StandardOutput.PipeToAsync(outputStream);

                while (!outputTask.IsCompleted && !outputTask.IsCanceled)
                {
                    var errs = cmd.GetOutputAndErrorLines();
                    logger.LogInformation(string.Join(Environment.NewLine, errs));

                    await Task.Delay(1000);
                }

                CommandResult result = cmd.Result;

                if (result != null && result.Success)
                {
                    logger.LogInformation("Created file");
                }
                else
                {
                    logger.LogError(result.StandardError);
                }
            }

            logger?.LogInformation("connection closed");
        }
    }

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services) { }

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            app.Run(async (context) =>
            {
                var log = context.RequestServices.GetRequiredService<ILogger<Startup>>();
                await context.Response.WriteAsync("Hello World!");
            });
        }
    }

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IWebHostBuilder CreateHostBuilder(string[] args) =>
            WebHost
                .CreateDefaultBuilder(args)
                .ConfigureServices(services =>
                {
                    services.AddLogging(options =>
                    {
                        options.AddDebug().AddConsole().SetMinimumLevel(LogLevel.Information);
                    });
                })
                .UseKestrel(options =>
                {
                    options.ListenAnyIP(15666, builder =>
                    {
                        builder.UseConnectionHandler<IncomingRtmpConnectionHandler>();
                    });

                    options.ListenLocalhost(5000);

                    // HTTPS on 5001
                    options.ListenLocalhost(5001, builder =>
                    {
                        builder.UseHttps();
                    });
                })
                .UseStartup<Startup>();
    }
}


Questions:


1. Is this a valid approach; do you see any fundamental issues?
2. Is the pipe naming correct? Is the convention just pipe:someName?
3. Any ideas on what specifically may be causing the "Concurrent reads are not allowed"?
4. If #3 is solved, does the rest of this seem valid?
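
On question 2: as far as I know, ffmpeg's pipe: protocol addresses file descriptors (pipe:0 is stdin, pipe:1 is stdout), not names, so pipe:{rtmpPassthroughPipeName} would not reach the NamedPipeServerStream. On Windows a named pipe is normally opened through its full path in the Win32 pipe namespace instead. A hedged, untested sketch of what the argument line might look like under that assumption:

// Untested assumption: address the named pipe via the Win32 pipe
// namespace (\\.\pipe\<name>) as a plain input URL instead of pipe:.
var cmdArgs = $@"-i \\.\pipe\{rtmpPassthroughPipeName} -preset slow -c copy -f mp4 -y pipe:1";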










-
bad audio mic recording quality with ffmpeg compared to sox
1 July 2021, by user2355330
I am asking here because, after three days of searching, I am stuck on a really simple point.


I want to record the sound of my mic on MacOS using ffmpeg.


I managed to do it using the following command:


ffmpeg -f avfoundation -audio_device_index 2 -i "none:-" -c:a pcm_s32le alexspeaking.wav -y -loglevel debug



The issue is that whenever I speak, there are cracks and pops in the sound...


I tried sox and it gave me perfect, crystal-clear sound, and I have no idea why... Below is the output of the sox command:


sox -t coreaudio "G935 Gaming Headset" toto.wav -V6
sox: SoX v
time: Nov 15 2020 01:06:02
uname: Darwin MacBook-Pro.local 20.5.0 Darwin Kernel Version 20.5.0: Sat May 8 05:10:33 PDT 2021; root:xnu-7195.121.3~9/RELEASE_X86_64 x86_64
compiler: gcc Apple LLVM 12.0.0 (clang-1200.0.32.27)
arch: 1288 48 88 L
sox INFO coreaudio: Found Audio Device "DELL U2721DE"
sox INFO coreaudio: Found Audio Device "G935 Gaming "
sox DBUG coreaudio: audio device did not accept 2 channels. Use 1 channels instead.
sox DBUG coreaudio: audio device did not accept 44100 sample rate. Use 48000 instead.
Input File : 'G935 Gaming Headset' (coreaudio)
Channels : 1
Sample Rate : 48000
Precision : 32-bit
Sample Encoding: 32-bit Signed Integer PCM
Endian Type : little
Reverse Nibbles: no
Reverse Bits : no
sox INFO sox: Overwriting `toto.wav'
sox DBUG wav: Writing Wave file: Microsoft PCM format, 1 channel, 48000 samp/sec
sox DBUG wav: 192000 byte/sec, 4 block align, 32 bits/samp
Output File : 'toto.wav'
Channels : 1
Sample Rate : 48000
Precision : 32-bit
Sample Encoding: 32-bit Signed Integer PCM
Endian Type : little
Reverse Nibbles: no
Reverse Bits : no
Comment : 'Processed by SoX'
sox DBUG effects: sox_add_effect: extending effects table, new size = 8
sox INFO sox: effects chain: input 48000Hz 1 channels (multi) 32 bits unknown length
sox INFO sox: effects chain: output 48000Hz 1 channels (multi) 32 bits unknown length
sox DBUG sox: start-up time = 0.051332
In:0.00% 00:00:07.13 [00:00:00.00] Out:340k [ | ] Clip:0 ^C
sox DBUG input: output buffer still held 2048 samples; dropped.
Aborted.
sox DBUG wav: Finished writing Wave file, 1359872 data bytes 339968 samples



I am pretty sure the issue is linked to the encoding parameters I used with ffmpeg, but I cannot work out which ones I should use.
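
Comparing the two logs, one difference stands out: CoreAudio renegotiated sox down to mono at 48 kHz, while the ffmpeg command above requests neither, so the capture may be running at a mismatched rate. A sketch that pins the output to the format sox negotiated and deepens the input thread queue; the device index is carried over from the command above, and the whole line is an untested assumption:

# Untested sketch: resample/encode to the mono 48 kHz 32-bit format that
# CoreAudio negotiated for sox, and enlarge the capture thread queue so
# scheduling hiccups are less likely to surface as pops.
ffmpeg -f avfoundation -thread_queue_size 1024 -audio_device_index 2 -i "none:-" \
       -ar 48000 -ac 1 -c:a pcm_s32le alexspeaking.wav -y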


Any ideas from the ffmpeg experts here?


-
Mixed Reality WebRTC without Signalling Server
25 May 2021, by SilverLife
I am trying to find a way to use Mixed Reality WebRTC (link to git repo) without a signalling server.
In detail, I want to create an SDP file from my ffmpeg video sender and use this SDP description in my Unity project to bypass the signalling process and receive the ffmpeg video stream.
Is there a way of doing so with Mixed Reality WebRTC? I have already searched for the line of code where the SDP file is created within MR WebRTC, but I didn't find it.


I am relatively new to this topic and I am not sure whether this works at all, but since ffmpeg is not directly compatible with WebRTC, I was thinking that this might be the most promising approach.
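
For the ffmpeg side, ffmpeg can write out the SDP it generates for an RTP output via its -sdp_file option, which at least gives a session description to experiment with; the input file, address and port below are placeholders. Note this produces a plain RTP SDP without the ICE candidates and DTLS fingerprint a WebRTC stack expects, which is exactly the compatibility gap mentioned above, so MR WebRTC will most likely still reject it without further work:

# Untested sketch: send H.264 over RTP and dump the matching SDP.
# input.mp4, the address and the port are placeholders.
ffmpeg -re -i input.mp4 -an -c:v libx264 -sdp_file video.sdp -f rtp rtp://127.0.0.1:5004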