
Media (1)
-
Bee video in portrait orientation
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (102)
-
APPENDIX: The plugins used specifically for the farm
5 March 2010, by
The central/master site of the farm needs several additional plugins, beyond those of the channel sites, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance when users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
Adding user-specific information and other changes to author-related behaviour
12 April 2011, by
The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.
-
Possibility of deployment as a farm
12 April 2011, by
MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This makes it possible, for example: to share the setup costs between several projects / individuals; to deploy a multitude of unique sites quickly; to avoid having to put all creations into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)
On other sites (9349)
-
How to simultaneously capture mic, stream it to an RTSP server and play it on the iPhone's speaker?
24 August 2021, by Norbert Towiański
I want to capture sound from the mic, stream it to an RTSP server, and play it simultaneously on the iPhone's speaker after getting the samples back from the RTSP server; a kind of loopback. I use FFmpegKit and I want to use MobileVLCKit, but unfortunately the microphone is turned off as soon as I start playing the stream.
I think I've done the first step (capturing from the microphone and sending the OutputStream to the RTSP server):


@IBAction func transmitBtnPressed(_ sender: Any) {
 ffmpeg_transmit()
}

@IBAction func recordBtnPressed(_ sender: Any) {
 switch recordingState {
 case .idle:
 recordingState = .start
 startRecording()
 recordBtn.setTitle("Started", for: .normal)
 let urlToFile = URL(fileURLWithPath: outPipePath!)
 outputStream = OutputStream(url: urlToFile, append: false)
 outputStream!.open()
 case .capturing:
 recordingState = .end
 stopRecording()
 recordBtn.setTitle("End", for: .normal)
 default:
 break
 }
}

override func viewDidLoad() {
 super.viewDidLoad()
 outPipePath = FFmpegKitConfig.registerNewFFmpegPipe()
 self.setup()
}

override func viewDidAppear(_ animated: Bool) {
 super.viewDidAppear(animated)
 setUpAuthStatus()
}

func setUpAuthStatus() {
 if AVCaptureDevice.authorizationStatus(for: AVMediaType.audio) != .authorized {
 AVCaptureDevice.requestAccess(for: AVMediaType.audio, completionHandler: { (authorized) in
 DispatchQueue.main.async {
 if authorized {
 self.setup()
 }
 }
 })
 }
}

func setup() {
 self.session.sessionPreset = AVCaptureSession.Preset.high
 
 self.recordingURL = URL(fileURLWithPath: "\(NSTemporaryDirectory() as String)/file.m4a")
 if self.fileManager.isDeletableFile(atPath: self.recordingURL!.path) {
 _ = try? self.fileManager.removeItem(atPath: self.recordingURL!.path)
 }
 
 self.assetWriter = try? AVAssetWriter(outputURL: self.recordingURL!,
 fileType: AVFileType.m4a)
 self.assetWriter!.movieFragmentInterval = CMTime.invalid
 self.assetWriter!.shouldOptimizeForNetworkUse = true
 
 let audioSettings = [
 AVFormatIDKey: kAudioFormatLinearPCM,
 AVSampleRateKey: 48000.0,
 AVNumberOfChannelsKey: 1,
 AVLinearPCMIsFloatKey: false,
 AVLinearPCMBitDepthKey: 16,
 AVLinearPCMIsBigEndianKey: false,
 AVLinearPCMIsNonInterleaved: false,
 
 ] as [String : Any]
 
 
 self.audioInput = AVAssetWriterInput(mediaType: AVMediaType.audio,
 outputSettings: audioSettings)
 
 self.audioInput?.expectsMediaDataInRealTime = true
 
 if self.assetWriter!.canAdd(self.audioInput!) {
 self.assetWriter?.add(self.audioInput!)
 }
 
 self.session.startRunning()
 
 DispatchQueue.main.async {
 self.session.beginConfiguration()
 
 self.session.commitConfiguration()
 
 let audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
 let audioIn = try? AVCaptureDeviceInput(device: audioDevice!)
 
 if self.session.canAddInput(audioIn!) {
 self.session.addInput(audioIn!)
 }
 
 if self.session.canAddOutput(self.audioOutput) {
 self.session.addOutput(self.audioOutput)
 }
 
 self.audioConnection = self.audioOutput.connection(with: AVMediaType.audio)
 }
}

func startRecording() {
 if self.assetWriter?.startWriting() != true {
 print("error: \(self.assetWriter?.error.debugDescription ?? "")")
 }
 
 self.audioOutput.setSampleBufferDelegate(self, queue: self.recordingQueue)
}

func stopRecording() {
 self.audioOutput.setSampleBufferDelegate(nil, queue: nil)
 
 self.assetWriter?.finishWriting {
 print("Saved in folder \(self.recordingURL!)")
 }
}
func captureOutput(_ captureOutput: AVCaptureOutput, didOutput
 sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
 
 if !self.isRecordingSessionStarted {
 let presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
 self.assetWriter?.startSession(atSourceTime: presentationTime)
 self.isRecordingSessionStarted = true
 recordingState = .capturing
 }
 
 var blockBuffer: CMBlockBuffer?
 var audioBufferList: AudioBufferList = AudioBufferList.init()
 
 CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, bufferListSizeNeededOut: nil, bufferListOut: &audioBufferList, bufferListSize: MemoryLayout<AudioBufferList>.size, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, blockBufferOut: &blockBuffer)
 let buffers = UnsafeMutableAudioBufferListPointer(&audioBufferList)
 
 for buffer in buffers {
 let u8ptr = buffer.mData!.assumingMemoryBound(to: UInt8.self)
 let output = outputStream!.write(u8ptr, maxLength: Int(buffer.mDataByteSize))
 
 if (output == -1) {
 let error = outputStream?.streamError
 print("\(#file) > \(#function) > Error on outputStream: \(error!.localizedDescription)")
 }
 else {
 print("\(#file) > \(#function) > Data sent")
 }
 }
}

func ffmpeg_transmit() {
 
 let cmd1: String = "-f s16le -ar 48000 -ac 1 -i "
 let cmd2: String = " -probesize 32 -analyzeduration 0 -c:a libopus -application lowdelay -ac 1 -ar 48000 -f rtsp -rtsp_transport udp rtsp://localhost:18556/mystream"
 let cmd = cmd1 + outPipePath! + cmd2
 
 print(cmd)
 
 ffmpegSession = FFmpegKit.executeAsync(cmd, withExecuteCallback: { ffmpegSession in
 
 let state = ffmpegSession?.getState()
 let returnCode = ffmpegSession?.getReturnCode()
 if let returnCode = returnCode, let get = ffmpegSession?.getFailStackTrace() {
 print("FFmpeg process exited with state \(String(describing: FFmpegKitConfig.sessionState(toString: state!))) and rc \(returnCode).\(get)")
 }
 }, withLogCallback: { log in
 
 }, withStatisticsCallback: { statistics in
 
 })
}


I want to use MobileVLCKit in this way:


func startStream(){
 guard let url = URL(string: "rtsp://localhost:18556/mystream") else {return}
 audioPlayer!.media = VLCMedia(url: url)

 audioPlayer!.media.addOption( "-vv")
 audioPlayer!.media.addOption( "--network-caching=10000")

 audioPlayer!.delegate = self
 audioPlayer!.audio.volume = 100

 audioPlayer!.play()

}



Could you give me some hints on how to implement this?
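
A hedged hint that is not part of the original post: on iOS, the capture side is often silenced simply because starting playback reconfigures the shared audio session. A minimal sketch of the usual workaround, assuming the capture and MobileVLCKit code above stays unchanged, is to put the session into the play-and-record category before starting either side:

import AVFoundation

func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        // .playAndRecord keeps the microphone active while audio is playing;
        // .defaultToSpeaker routes output to the loudspeaker instead of the earpiece.
        try session.setCategory(.playAndRecord,
                                mode: .default,
                                options: [.defaultToSpeaker, .allowBluetooth])
        try session.setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }
}

Calling this once (for example from viewDidLoad, before setup() and startStream()) is a common prerequisite for loopback-style apps; whether it is sufficient with FFmpegKit and MobileVLCKit in this exact pipeline is an assumption, not something verified here.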


-
Converting images to MP4 using ffmpeg on iPhone
29 November 2011, by user633901
Up till now I can create MPEG-1, but with no luck for MP4. Maybe we can talk and share information. Someone told me that I have to set some flags to use MP4, but I am stuck on using them...
The following is the working code:
av_register_all();
printf("Video encoding\n");
/// find the mpeg video encoder
//codec=avcodec_find_encoder(CODEC_ID_MPEG1VIDEO);
codec = avcodec_find_encoder(CODEC_ID_MPEG4);
if (!codec) {
fprintf(stderr, "codec not found\n");
exit(1);
}
c = avcodec_alloc_context();
picture = avcodec_alloc_frame();
// put sample parameters
c->bit_rate = 400000;
/// resolution must be a multiple of two
c->width = 240;
c->height = 320;
//c->codec_id = fmt->video_codec;
//frames per second
c->time_base= (AVRational){1,25};
c->gop_size = 10; /// emit one intra frame every ten frames
c->max_b_frames=1;
c->pix_fmt =PIX_FMT_YUV420P; // PIX_FMT_YUV420P
if (avcodec_open(c, codec) < 0) {
fprintf(stderr, "could not open codec\n");
exit(1);
}
f = fopen([[NSHomeDirectory() stringByAppendingPathComponent:@"test.mp4"] UTF8String], "wb");
if (!f) {
fprintf(stderr, "could not open %s\n",[@"test.mp4" UTF8String]);
exit(1);
}
// alloc image and output buffer
outbuf_size = 100000;
outbuf = malloc(outbuf_size);
size = c->width * c->height;
#pragma mark -
AVFrame* outpic = avcodec_alloc_frame();
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height); //this is half size of numbytes.
//create buffer for the output image
uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);
#pragma mark -
for(k=0;k<1;k++) {
for(i=0;i<25;i++) {
fflush(stdout);
int numBytes = avpicture_get_size(PIX_FMT_RGBA, c->width, c->height);
uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%d.png", i+1]];
CGImageRef newCgImage = [image CGImage];
CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);
///////////////////////////
//outbuffer=(uint8_t *)CFDataGetBytePtr(bitmapData);
//////////////////////////
avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGBA, c->width, c->height);
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);//does not have image data.
struct SwsContext* fooContext = sws_getContext(c->width, c->height,
PIX_FMT_RGBA,
c->width, c->height,
PIX_FMT_YUV420P,
SWS_FAST_BILINEAR, NULL, NULL, NULL);
//perform the conversion
sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);
// Here is where I try to convert to YUV
// encode the image
out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
printf("encoding frame %3d (size=%5d)\n", i, out_size);
fwrite(outbuf, 1, out_size, f);
free(buffer);
buffer = NULL;
}
// get the delayed frames
for(; out_size; i++) {
fflush(stdout);
out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
printf("write frame %3d (size=%5d)\n", i, out_size);
fwrite(outbuf, 1, out_size, f);
}
}
// add sequence end code to have a real mpeg file
outbuf[0] = 0x00;
outbuf[1] = 0x00;
outbuf[2] = 0x01;
outbuf[3] = 0xb7;
fwrite(outbuf, 1, 4, f);
fclose(f);
free(picture_buf);
free(outbuf);
avcodec_close(c);
av_free(c);
av_free(picture);
//av_free(outpic);
printf("\n");
my msn: hieeli@hotmail.com
-
How to play an mp4 video file with the HTML5 video tag on iOS (iPhone and iPad)?
22 April 2015, by Mokielas
I want to display HTML5 video to my users using the video tag. In Firefox, Chrome and Opera, WebM works as expected. In Safari on Windows and Mac my MP4 version works too. The only problem I'm experiencing is that it won't play on iPad and iPhone (in Safari, of course).
Create video
The MP4 (H.264 + AAC-LC) is converted like this (with profile: baseline and level 3.0 for maximum compatibility with iOS):
- Stream #0:0(eng): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x352, 198 kb/s, 17 fps, 17 tbr, 17408 tbn, 34 tbc (default)
- Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 56 kb/s (default)
Edit: Full ffprobe output (with slight differences in bitrate etc. from the figures above):
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.30.100
Duration: 00:01:00.05, start: 0.046440, bitrate: 289 kb/s
Stream #0:0(eng): Video: h264 (Constrained Baseline)
(avc1 /0x31637661), yuv420p, 640x352, 198 kb/s, 25 fps, 25 tbr,
12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz,
stereo, fltp, 85 kb/s (default)
Metadata:
 handler_name : SoundHandler
I found various requirements for iOS devices like this and this; someone also mentioned adding the yuv420p pixel format when converting.
In fact, the ffmpeg command looks like this:
ffmpeg -i __inputfile__ -vcodec libx264 -pix_fmt yuv420p -profile:v baseline -level 3.0 -b:v 200K -r 17 -bt 800K -c:a libfdk_aac -b:a 85k -ar 44100 -y __outputfile_lowversion__.mp4
Display video
With Modernizr I detect which format is "supported" and add it to the src of the video tag. The last thing is adding the right MIME type. For mp4 I add type="video/mp4". The full code for the video tag is:
<video class="p-video" preload="auto" autoplay="" type="video/mp4" src="http://full.url/to/video_low.mp4"></video>
I tried various approaches: my own implementation with my own interface, the controls and players from the browser vendors, and video.js, just to check whether I'm too stupid for this. All of them work in the environments listed above, except on iPhone and iPad.
I read this article on Video on the web, especially this part, and only serve the "right" file with the "right" type, without a poster attribute set.
My Apache has:
AddType video/mp4 mp4 m4v f4v f4p
AddType video/ogg ogv
AddType video/webm webm
And byte-ranges are enabled; this is needed so the server can return partial content.
Does anyone have a clue what's going on here? Thanks in advance!
Edit: Safari and Chrome both throw a MEDIA_ERR_SRC_NOT_SUPPORTED error on iPad. There must be an issue with the encoding.