
Media (33)
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (55)
-
Updating from version 0.1 to 0.2
24 June 2013, by
Explanations of the various notable changes when moving from version 0.1 of MediaSPIP to version 0.3. What's new?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
Personalizing by adding your logo, banner or background image
5 September 2013, by
Some themes take three customization elements into account: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013, by
Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP via the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form for creating a news item.
Form for creating a news item: for a document of the news type, the fields offered by default are: publication date (customize the publication date) (...)
On other sites (7705)
-
AVAssetWriter creating mp4 with no sound in last 50msec
12 August 2015, by Joseph K
I'm working on a project involving live streaming from the iPhone's camera.
To minimize loss during AVAssetWriter finishWriting, I use an array of 2 asset writers and swap them whenever I need to create an mp4 fragment out of the recorded buffers.
Code responsible for capturing Audio & Video sample buffers
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    if !CMSampleBufferDataIsReady(sampleBuffer) {
        println("Skipped sample because it was not ready")
        return
    }
    if captureOutput == audioOutput {
        if audioWriterBuffers[0].readyForMoreMediaData() {
            if !writers[0].appendAudio(sampleBuffer) {
                println("Failed to append: \(recordingPackages[0].name). Error: \(recordingPackages[0].outputWriter.error.localizedDescription)")
            } else {
                writtenAudioFrames++
                if writtenAudioFrames == framesPerFragment {
                    writeFragment()
                }
            }
        } else {
            println("Skipped audio sample; it is not ready.")
        }
    } else if captureOutput == videoOutput {
        // Video sample buffer:
        if videoWriterBuffers[0].readyForMoreMediaData() {
            // Call startSessionAtSourceTime if needed,
            // then append the sample buffer with a source time.
        }
    }
}

Code responsible for the writing and swapping
func writeFragment() {
    writtenAudioFrames = 0
    swap(&writers[0], &writers[1])
    if !writers[0].startWriting() {
        println("Failed to start OTHER writer writing")
    } else {
        startTime = CFAbsoluteTimeGetCurrent()
    }
    audioWriterBuffers[0].markAsFinished()
    videoWriterBuffers[0].markAsFinished()
    writers[1].outputWriter.finishWritingWithCompletionHandler { () -> Void in
        println("Finish Package record Writing, now Resetting")
        //
        // Handle written MP4 fragment code
        //
        // Reset the writer: basically reallocate it as a new AVAssetWriter
        // with a given URL and MPEG-4 file type, and add inputs to it.
        self.resetWriter()
    }
}

The issue at hand
The written MP4 fragments are being sent over to a local sandbox server to be analyzed.
When the MP4 fragments are stitched together using FFmpeg, there is a noticeable glitch in the sound, because there is no audio in the last 50 ms of every fragment.
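For reference, that kind of stitching is typically done losslessly with ffmpeg's concat demuxer. The sketch below (fragment names are hypothetical) only assembles the demuxer's list file and the command line, rather than running ffmpeg:

```python
# Sketch: inputs for ffmpeg's concat demuxer (fragment names hypothetical).
fragments = ["fragment0.mp4", "fragment1.mp4", "fragment2.mp4"]

# The demuxer reads a list file with one "file '<name>'" line per fragment.
list_file = "".join("file '%s'\n" % f for f in fragments)

# "-c copy" stitches without re-encoding, so any gap already baked into a
# fragment's audio track survives unchanged into the concatenated output.
cmd = ["ffmpeg", "-f", "concat", "-safe", "0",
       "-i", "list.txt", "-c", "copy", "out.mp4"]
```

Because the streams are copied rather than re-encoded, the concatenation step itself cannot introduce or hide the 50 ms holes; they must already be present in the individual fragments.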
My audio AVAssetWriterInput's settings are the following:

static let audioSettings: [NSObject : AnyObject]! =
[
    AVFormatIDKey : NSNumber(integer: kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey : NSNumber(integer: 1),
    AVSampleRateKey : NSNumber(int: 44100),
    AVEncoderBitRateKey : NSNumber(int: 64000),
]

As such, I encode 44 audio sample buffers every second. They are all being successfully appended.
Further resources
Here’s a waveform display of the audio stream after concatenating the mp4 fragments
!! Note that my fragments are about 2 secs in length.
!! Note that I'm focusing on audio, since video frames are extremely smooth when jumping from one fragment to another.

Any idea as to what is causing this? I can provide further code or info if needed.
-
Encoding video settings with Transloadit and FFMPEG
2 October 2015, by David Soler
I'm using Transloadit to convert and compress videos from .mov to .ts format. I'm using the JSON templates, but unfortunately the docs are not too extensive. The thing is, the quality I'm getting right now is very poor and pixelated. If I do it through the console with the ffmpeg command, including some parameters such as crf (Constant Rate Factor), the quality gets a lot better, but I don't know how to edit the Transloadit template to get the same result.
This is the ffmpeg command I'm using to convert the video in the console:
./ffmpeg -i ../canales.mov -c:v libx264 -crf 23 -bsf:a aac_adtstoasc output.ts
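As I understand it, Transloadit's `ffmpeg` hash is a dictionary of raw ffmpeg parameters, each key becoming a `-key value` pair on the command line. The helper below is purely illustrative (it is not Transloadit's actual code) and just shows how a command like the one above decomposes into such a dictionary:

```python
def ffmpeg_cli(src, dst, opts):
    """Hypothetical illustration: turn an options dict (like a template's
    "ffmpeg" hash) into the equivalent ffmpeg command-line flags."""
    args = ["ffmpeg", "-i", src]
    for key, value in opts.items():
        args += ["-" + key, str(value)]   # each key maps to one -flag value pair
    return args + [dst]

# The console command above, expressed as such a dict:
cmd = ffmpeg_cli("../canales.mov", "output.ts",
                 {"c:v": "libx264", "crf": 23, "bsf:a": "aac_adtstoasc"})
```

Under that reading, adding `"crf": 23` to the template's `ffmpeg` hash (as the template below already does) should correspond to the `-crf 23` flag used in the console.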
And this is the JSON template I'm using right now. I guess I should add parameters to the ffmpeg hash, but I don't know which settings are allowed:
{
  "steps": {
    "file": {
      "robot": "/file/filter",
      "accepts": [
        ["${file.mime}", "regex", "video"]
      ],
      "declines": [
        ["${file.size}", ">", "10485760"],
        ["${file.meta.duration}", ">", "16"]
      ],
      "error_on_decline": true
    },
    "segments": {
      "robot": "/video/encode",
      "preset": "iphone-high",
      "width": 1242,
      "height": 2208,
      "use": "file",
      "segment": true,
      "segment_duration": 10,
      "ffmpeg_stack": "v2.2.3",
      "ffmpeg": {
        "b": "1200K",
        "crf": 23
      }
    },
    "thumb": {
      "robot": "/video/thumbs",
      "use": "file",
      "count": 1
    },
    "store": {
      "robot": "/s3/store",
      "use": ["segments", "thumb"],
      "key": "key",
      "secret": "Secret",
      "bucket": "bucket"
    }
  }
}

-
iOS - How can I stream encoded video frames (from AVFoundation and VideoToolbox) from device to server via RTP
2 October 2015, by ASP Peek
I am trying to stream live video from my iPhone to a server using RTP.
Using AVFoundation's AVCaptureVideoDataOutput, I was able to get a CMSampleBuffer for video. I then feed these frames, as they arrive, into VideoToolbox's VTCompressionSessionEncodeFrame() and am able to get encoded CMSampleBuffers.
Now, to send these encoded frames via RTP, I came across FFmpeg and found a build of its libraries for iOS devices (https://github.com/kewlbear/FFmpeg-iOS-build-script).
However, I am not able to find any iOS example, sample code, or documentation that explains the process of sending the encoded frames via RTP from an iOS app.
Is there any existing example or documentation that explains how I can send the encoded CMSampleBuffers to a server via RTP using FFmpeg?
Thanks in Advance :)
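While FFmpeg's libavformat can do the RTP muxing for you (an `rtp://` output URL with an H.264 stream), the packet format itself is small enough to sketch. Below is a minimal RTP fixed header per RFC 3550; the field values are illustrative, and real H.264 streaming additionally requires the RFC 6184 payload format (FU-A fragmentation, SPS/PPS handling), which this sketch does not implement:

```python
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=96, marker=0):
    """Build the 12-byte RTP fixed header (RFC 3550, section 5.1).
    version=2, no padding, no extension, zero CSRCs."""
    byte0 = 2 << 6                          # V=2, P=0, X=0, CC=0
    byte1 = (marker << 7) | payload_type    # M bit + payload type (96 = dynamic)
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

# Each encoded H.264 access unit would be split into packets no larger than
# the network MTU; the timestamp advances in 90 kHz clock units, and the
# marker bit is set on the last packet of each access unit.
hdr = rtp_header(seq=1, timestamp=3000, ssrc=0x12345678, marker=1)
```

In practice the header is prepended to each payload chunk and the result sent over UDP; sequence numbers increment per packet, while the timestamp stays constant across packets of the same frame.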