
Other articles (67)
-
User profiles
12 April 2011. Each user has a profile page for editing their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...) -
Configuring language support
15 November 2010. Accessing the configuration and adding supported languages
To configure support for new languages, go to the "Administrer" section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" section that lets you enable support for new languages.
Each newly added language can still be disabled as long as no object has been created in that language. Once one has, the language is greyed out in the configuration and (...) -
XMP PHP
13 May 2011. According to Wikipedia, XMP stands for:
Extensible Metadata Platform, an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being XML-based, it handles a set of dynamic tags for use in the Semantic Web.
XMP makes it possible to store, as an XML document, information about a file: title, author, history (...)
On other sites (3670)
-
Transcode to opus by Fluent-ffmpeg or ffmpeg from command line
14 July 2017, by shamaleyte
My purpose is to transcode a webm file into an opus file.
It works just fine with the following:
ffmpeg -i input.webm -vn -c:a copy output.opus
But the generated opus file always starts from the 4th or 5th second when I play it. It seems that the first seconds are lost. Any idea why this happens?
>ffmpeg -i x.webm -vn -c:a copy x1.opus
ffmpeg version N-86175-g64ea4d1 Copyright (c) 2000-2017 the FFmpeg
developers
built with gcc 6.3.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
libavutil 55. 63.100 / 55. 63.100
libavcodec 57. 96.101 / 57. 96.101
libavformat 57. 72.101 / 57. 72.101
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 90.100 / 6. 90.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
Input #0, matroska,webm, from 'x.webm':
Metadata:
encoder : libwebm-0.2.1.0
creation_time : 2017-06-19T20:50:21.722000Z
Duration: 00:00:32.33, start: 0.000000, bitrate: 134 kb/s
Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
Stream #0:1(eng): Video: vp8, yuv420p(progressive), 640x480, SAR 1:1 DAR 4:3, 16.67 fps, 16.67 tbr, 1k tbn, 1k tbc (default)
Output #0, opus, to 'x1.opus':
Metadata:
encoder : Lavf57.72.101
Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
Metadata:
encoder : Lavf57.72.101
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
size= 114kB time=00:00:32.33 bitrate= 28.8kbits/s speed=3.22e+003x
video:0kB audio:111kB subtitle:0kB other streams:0kB global headers:0kB
muxing overhead: 2.152229%
It jumps from 0 straight to the 4th second.
Please take a look at this screencast.
https://www.screenmailer.com/v/52IXnpAarHavwJE
This is the sample video file that I tried to transcode: https://drive.google.com/open?id=0B2sa3oV_Y3X_ZmVWX3MzTlRPSmc
So I guess the transcoding starts right at the point where the voice comes in; why is that?
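A hedged first step, assuming ffmpeg and ffprobe are on the PATH and using the asker's file names: check what start offset the audio stream actually reports, and try re-encoding instead of stream-copying, since `-c:a copy` carries the WebM timestamps (and any leading gap) into the Opus file unchanged. This is a diagnostic sketch, not a confirmed fix; libopus support in the ffmpeg build is an assumption.

```shell
#!/bin/sh
# Sketch: build the two commands to try. The script only prints them,
# since input.webm is the asker's file and may not exist locally.
IN="input.webm"

# 1) Inspect where the audio stream claims to start:
PROBE="ffprobe -v error -select_streams a:0 -show_entries stream=start_time -of csv=p=0 $IN"

# 2) Re-encode instead of stream-copying, so the Opus file gets fresh timestamps:
ENCODE="ffmpeg -i $IN -vn -c:a libopus -b:a 64k output.opus"

echo "$PROBE"
echo "$ENCODE"
```

If ffprobe reports a non-zero start_time, that offset, not the transcode itself, is the likely source of the missing seconds.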
-
How to convert a Stream on the fly with FFMpegCore?
18 October 2023, by Adrian
For a school project, I need to stream videos that I get from torrents while they are still downloading on the server.
When the video is a .mp4 file there is no problem, but I must also be able to stream .mkv files. For that, I need to convert them to .mp4 before sending them to the client, and I can't find a way to convert the Stream I get from MonoTorrents into a Stream I can send to my client using FFMpegCore.


Here is the code I wrote to simply download and stream my torrent :


var cEngine = new ClientEngine();

var manager = await cEngine.AddStreamingAsync(GenerateMagnet(torrent), ) ?? throw new Exception("An error occurred while creating the torrent manager");

await manager.StartAsync();
await manager.WaitForMetadataAsync();

var videoFile = manager.Files.OrderByDescending(f => f.Length).FirstOrDefault();
if (videoFile == null)
 return Results.NotFound();

var stream = await manager.StreamProvider!.CreateStreamAsync(videoFile, true);
return Results.File(stream, contentType: "video/mp4", fileDownloadName: manager.Name, enableRangeProcessing: true);



I saw that the most common way to convert videos is with ffmpeg, and .NET has a package called FFMpegCore that wraps it.

Right before the return in my previous code, I would add:

if (!videoFile.Path.EndsWith(".mp4"))
{
    var outputStream = new MemoryStream();
    FFMpegArguments
        .FromPipeInput(new StreamPipeSource(stream), options =>
        {
            options.ForceFormat("mp4");
        })
        .OutputToPipe(new StreamPipeSink(outputStream))
        .ProcessAsynchronously();
    return Results.File(outputStream, contentType: "video/mp4", fileDownloadName: manager.Name, enableRangeProcessing: true);
}



I unfortunately can't get a "live" Stream to send to my client.
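For reference, the plain-ffmpeg equivalent of what the pipe-based conversion needs to do can be sketched on the command line. The key detail is that the default MP4 muxer requires a seekable output, so writing MP4 to a pipe generally needs fragmented-MP4 flags; codec choices below are illustrative assumptions, not the asker's settings.

```shell
#!/bin/sh
# Sketch of the underlying ffmpeg invocation for piping MP4 from a live input.
# "-movflags frag_keyframe+empty_moov" requests fragmented MP4, which can be
# written to a non-seekable pipe; a plain MP4 cannot. Printed, not executed,
# since there is no real input stream here.
CMD="ffmpeg -i pipe:0 -c:v libx264 -c:a aac -movflags frag_keyframe+empty_moov -f mp4 pipe:1"
echo "$CMD"
```

In FFMpegCore terms this would presumably mean adding the equivalent custom output arguments and awaiting the processing task before or while streaming; both points should be verified against the library's documentation rather than taken from this sketch.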


-
AVAssetWriter creating mp4 with no sound in last 50msec
12 August 2015, by Joseph K
I'm working on a project involving live streaming from the iPhone's camera.
To minimize loss during AVAssetWriter's finishWriting, I use an array of two asset writers and swap them whenever I need to create an mp4 fragment out of the recorded buffers.
Code responsible for capturing Audio & Video sample buffers
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    if CMSampleBufferDataIsReady(sampleBuffer) <= 0 {
        println("Skipped sample because it was not ready")
        return
    }
    if captureOutput == audioOutput {
        if audioWriterBuffers[0].readyForMoreMediaData() {
            if !writers[0].appendAudio(sampleBuffer) { println("Failed to append: \(recordingPackages[0].name). Error: \(recordingPackages[0].outputWriter.error.localizedDescription)") }
            else {
                writtenAudioFrames++
                if writtenAudioFrames == framesPerFragment {
                    writeFragment()
                }
            }
        }
        else {
            println("Skipped audio sample; it is not ready.")
        }
    }
    else if captureOutput == videoOutput {
        // Video sample buffer
        if videoWriterBuffers[0].readyForMoreMediaData() {
            // Call startSessionAtSourceTime if needed
            // Append sample buffer with a source time
        }
    }
}

Code responsible for the writing and swapping
func writeFragment() {
    writtenAudioFrames = 0
    swap(&writers[0], &writers[1])
    if !writers[0].startWriting() { println("Failed to start OTHER writer writing") }
    else { startTime = CFAbsoluteTimeGetCurrent() }
    audioWriterBuffers[0].markAsFinished()
    videoWriterBuffers[0].markAsFinished()
    writers[1].outputWriter.finishWritingWithCompletionHandler { () -> Void in
        println("Finish Package record Writing, now Resetting")
        //
        // Handle written MP4 fragment code
        //
        // Reset writer:
        // basically reallocate it as a new AVAssetWriter with a given URL and MPEG4 file type and add inputs to it
        self.resetWriter()
    }
}

The issue at hand
The written MP4 fragments are being sent over to a local sandbox server to be analyzed.
When the MP4 fragments are stitched together using FFmpeg, there is a noticeable glitch in the sound because there is no audio in the last 50 msec of every fragment.
My audio AVAssetWriterInput's settings are the following:
static let audioSettings: [NSObject : AnyObject]! =
[
    AVFormatIDKey : NSNumber(integer: kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey : NSNumber(integer: 1),
    AVSampleRateKey : NSNumber(int: 44100),
    AVEncoderBitRateKey : NSNumber(int: 64000),
]

As such, I encode 44 audio sample buffers every second. They are all appended successfully.
Further resources
Here's a waveform display of the audio stream after concatenating the mp4 fragments.
!! Note that my fragments are about 2 secs in length.
!! Note that I'm focusing on audio, since video frames are extremely smooth when jumping from one fragment to another.
Any idea as to what is causing this? I can provide further code or info if needed.
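For completeness, the stitching step mentioned above can be sketched with ffmpeg's concat demuxer; the fragment names are hypothetical. One point worth noting when reading the waveform: each AAC fragment is encoded independently, so per-fragment encoder padding at the fragment boundary would survive a `-c copy` concatenation, which is at least consistent with a recurring ~50 msec silence.

```shell
#!/bin/sh
# Hedged sketch: stitch MP4 fragments with ffmpeg's concat demuxer.
# "-c copy" keeps each fragment's streams as-is, so any gap already inside
# a fragment is carried into the stitched file unchanged.
LIST="list.txt"
printf "file 'fragment0.mp4'\nfile 'fragment1.mp4'\n" > "$LIST"
CMD="ffmpeg -f concat -safe 0 -i $LIST -c copy stitched.mp4"
echo "$CMD"
```

The script prints the command rather than running it, since the fragments only exist on the asker's server.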