Other articles (106)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible, and development is based on expanding the (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an extra plugin that is not enabled by default when MediaSPIP is initialised.
    Once it has been activated, MediaSPIP init automatically applies a preconfiguration so that the new feature is immediately operational. No separate configuration step is therefore required.

  • Customising categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration section that you can specify the (...)

On other sites (10172)

  • How to simultaneously capture the mic, stream it to an RTSP server and play it on the iPhone's speaker?

    24 August 2021, by Norbert Towiański

    I want to capture sound from the mic, stream it to an RTSP server, and simultaneously play it on the iPhone's speaker after getting the samples back from the RTSP server; I mean that kind of loop. I use FFmpegKit and I want to use MobileVLCKit, but unfortunately the microphone goes silent when I start playing the stream.
    I think I've done the first step (capturing from the microphone and sending an OutputStream to the RTSP server):

    @IBAction func transmitBtnPressed(_ sender: Any) {
        ffmpeg_transmit()
    }

    @IBAction func recordBtnPressed(_ sender: Any) {
        switch recordingState {
        case .idle:
            recordingState = .start
            startRecording()
            recordBtn.setTitle("Started", for: .normal)
            let urlToFile = URL(fileURLWithPath: outPipePath!)
            outputStream = OutputStream(url: urlToFile, append: false)
            outputStream!.open()
        case .capturing:
            recordingState = .end
            stopRecording()
            recordBtn.setTitle("End", for: .normal)
        default:
            break
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Named pipe that the ffmpeg command will read PCM data from
        outPipePath = FFmpegKitConfig.registerNewFFmpegPipe()
        self.setup()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        setUpAuthStatus()
    }

    func setUpAuthStatus() {
        if AVCaptureDevice.authorizationStatus(for: AVMediaType.audio) != .authorized {
            AVCaptureDevice.requestAccess(for: AVMediaType.audio, completionHandler: { (authorized) in
                DispatchQueue.main.async {
                    if authorized {
                        self.setup()
                    }
                }
            })
        }
    }

    func setup() {
        self.session.sessionPreset = AVCaptureSession.Preset.high

        self.recordingURL = URL(fileURLWithPath: "\(NSTemporaryDirectory() as String)/file.m4a")
        if self.fileManager.isDeletableFile(atPath: self.recordingURL!.path) {
            _ = try? self.fileManager.removeItem(atPath: self.recordingURL!.path)
        }

        self.assetWriter = try? AVAssetWriter(outputURL: self.recordingURL!,
                                              fileType: AVFileType.m4a)
        self.assetWriter!.movieFragmentInterval = CMTime.invalid
        self.assetWriter!.shouldOptimizeForNetworkUse = true

        let audioSettings = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: 48000.0,
            AVNumberOfChannelsKey: 1,
            AVLinearPCMIsFloatKey: false,
            AVLinearPCMBitDepthKey: 16,
            AVLinearPCMIsBigEndianKey: false,
            AVLinearPCMIsNonInterleaved: false,
        ] as [String : Any]

        self.audioInput = AVAssetWriterInput(mediaType: AVMediaType.audio,
                                             outputSettings: audioSettings)
        self.audioInput?.expectsMediaDataInRealTime = true

        if self.assetWriter!.canAdd(self.audioInput!) {
            self.assetWriter?.add(self.audioInput!)
        }

        self.session.startRunning()

        DispatchQueue.main.async {
            self.session.beginConfiguration()
            self.session.commitConfiguration()

            let audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
            let audioIn = try? AVCaptureDeviceInput(device: audioDevice!)

            if self.session.canAddInput(audioIn!) {
                self.session.addInput(audioIn!)
            }

            if self.session.canAddOutput(self.audioOutput) {
                self.session.addOutput(self.audioOutput)
            }

            self.audioConnection = self.audioOutput.connection(with: AVMediaType.audio)
        }
    }

    func startRecording() {
        if self.assetWriter?.startWriting() != true {
            print("error: \(self.assetWriter?.error.debugDescription ?? "")")
        }
        self.audioOutput.setSampleBufferDelegate(self, queue: self.recordingQueue)
    }

    func stopRecording() {
        self.audioOutput.setSampleBufferDelegate(nil, queue: nil)
        self.assetWriter?.finishWriting {
            print("Saved in folder \(self.recordingURL!)")
        }
    }

    func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        if !self.isRecordingSessionStarted {
            let presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            self.assetWriter?.startSession(atSourceTime: presentationTime)
            self.isRecordingSessionStarted = true
            recordingState = .capturing
        }

        // Extract the raw PCM bytes from the sample buffer and write them to the pipe
        var blockBuffer: CMBlockBuffer?
        var audioBufferList: AudioBufferList = AudioBufferList.init()

        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
                                                                bufferListSizeNeededOut: nil,
                                                                bufferListOut: &audioBufferList,
                                                                bufferListSize: MemoryLayout<AudioBufferList>.size,
                                                                blockBufferAllocator: nil,
                                                                blockBufferMemoryAllocator: nil,
                                                                flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                                blockBufferOut: &blockBuffer)
        let buffers = UnsafeMutableAudioBufferListPointer(&audioBufferList)

        for buffer in buffers {
            let u8ptr = buffer.mData!.assumingMemoryBound(to: UInt8.self)
            let output = outputStream!.write(u8ptr, maxLength: Int(buffer.mDataByteSize))

            if output == -1 {
                let error = outputStream?.streamError
                print("\(#file) > \(#function) > Error on outputStream: \(error!.localizedDescription)")
            } else {
                print("\(#file) > \(#function) > Data sent")
            }
        }
    }

    func ffmpeg_transmit() {
        // Read s16le PCM from the pipe, encode it to Opus and publish it over RTSP
        let cmd1: String = "-f s16le -ar 48000 -ac 1 -i "
        let cmd2: String = " -probesize 32 -analyzeduration 0 -c:a libopus -application lowdelay -ac 1 -ar 48000 -f rtsp -rtsp_transport udp rtsp://localhost:18556/mystream"
        let cmd = cmd1 + outPipePath! + cmd2

        print(cmd)

        ffmpegSession = FFmpegKit.executeAsync(cmd, withExecuteCallback: { ffmpegSession in
            let state = ffmpegSession?.getState()
            let returnCode = ffmpegSession?.getReturnCode()
            if let returnCode = returnCode, let get = ffmpegSession?.getFailStackTrace() {
                print("FFmpeg process exited with state \(String(describing: FFmpegKitConfig.sessionState(toString: state!))) and rc \(returnCode).\(get)")
            }
        }, withLogCallback: { log in
        }, withStatisticsCallback: { statistics in
        })
    }


    I want to use MobileVLCKit this way:


    func startStream() {
        guard let url = URL(string: "rtsp://localhost:18556/mystream") else { return }
        audioPlayer!.media = VLCMedia(url: url)

        audioPlayer!.media.addOption("-vv")
        audioPlayer!.media.addOption("--network-caching=10000")

        audioPlayer!.delegate = self
        audioPlayer!.audio.volume = 100

        audioPlayer!.play()
    }


    Could you give me some hints on how to implement this?
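
    One hedged note on the simultaneous part: iOS normally mutes capture while playback runs unless the shared AVAudioSession is explicitly configured for both, so the missing piece may simply be a .playAndRecord session category set before the capture session and the VLC player start. A minimal sketch, assuming no other code in the post touches the audio session:

    import AVFoundation

    // Sketch: allow capture and playback to run at the same time.
    // Without .playAndRecord, starting playback can silence the microphone.
    func configureAudioSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            // .defaultToSpeaker routes output to the loudspeaker rather than the earpiece
            try session.setCategory(.playAndRecord,
                                    mode: .default,
                                    options: [.defaultToSpeaker, .allowBluetooth])
            try session.setActive(true)
        } catch {
            print("AVAudioSession configuration failed: \(error)")
        }
    }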


  • Capture a CCTV camera with an iPhone application using IP

    22 September 2021, by Bittu

    I want to develop a CCTV camera app and I don't know what steps I need to take. I have the data below for connecting to the CCTV camera:

    • IP address
    • Port ID
    • Username
    • Password

    I checked the live555 and RTMPStreamPublisher demos from here, but I don't know where I should start. I have also read that I should use the FFmpeg framework.


    What I want is an app similar to kView on iTunes. That app is able to stream a CCTV camera feed with the above configuration details.


    Does anyone know what direction I need to go in? Is there a demo or open-source app that accomplishes this?
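
    A hedged sketch of the usual starting point: most IP cameras expose their feed over RTSP, so the four values above can often be combined into an RTSP URL and handed to a player library such as MobileVLCKit (used elsewhere on this page). The host, port, credentials and especially the /stream1 path below are hypothetical placeholders; the real path is vendor-specific:

    import UIKit
    import MobileVLCKit

    final class CameraViewController: UIViewController {
        // Connection details from the camera's configuration (hypothetical placeholders)
        let host = "192.168.1.64"
        let port = 554
        let user = "admin"
        let password = "secret"

        let player = VLCMediaPlayer()

        override func viewDidLoad() {
            super.viewDidLoad()
            // Most IP cameras speak RTSP; the path component varies by vendor
            guard let url = URL(string: "rtsp://\(user):\(password)@\(host):\(port)/stream1") else { return }
            player.media = VLCMedia(url: url)
            player.drawable = view   // render the camera feed into this view controller's view
            player.play()
        }
    }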


  • Using the FFmpeg library with the iPhone SDK for video encoding

    7 August 2020, by user203349

    I need to encode several pictures shot by the iPhone camera into an mp4 video file, and I know FFmpeg can do this (the applications TimeLapser and ReelMoments already do it). I plan to use this in my app iMotion (available in the App Store).


    I successfully installed and compiled FFmpeg for the iPhone SDK using this link:
    http://lists.mplayerhq.hu/pipermail/ffmpeg-devel/2009-October/076618.html


    But now I'm stuck in my Xcode project. What should I do next to use the FFmpeg library for video encoding? The Apple documentation on using external libraries is very light, and I can't find any tutorial on the web that explains how to do this.
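
    A hedged aside for anyone with the same goal on current iOS: the stills-to-mp4 step no longer requires FFmpeg at all, since AVFoundation's AVAssetWriter can append pixel buffers directly to an H.264 mp4. A minimal sketch, with the frame size and rate as illustrative assumptions:

    import AVFoundation
    import UIKit

    // Sketch: encode a sequence of UIImages into an H.264 .mp4 with AVAssetWriter,
    // an AVFoundation alternative to linking FFmpeg into the project.
    func writeVideo(from images: [UIImage], to outputURL: URL, fps: Int32 = 30) throws {
        let size = CGSize(width: 1280, height: 720)   // assumed frame size
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        let settings: [String: Any] = [AVVideoCodecKey: AVVideoCodecType.h264,
                                       AVVideoWidthKey: Int(size.width),
                                       AVVideoHeightKey: Int(size.height)]
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                           sourcePixelBufferAttributes: nil)
        writer.add(input)
        guard writer.startWriting() else { throw writer.error ?? CocoaError(.fileWriteUnknown) }
        writer.startSession(atSourceTime: .zero)

        for (i, image) in images.enumerated() {
            // Simple back-pressure: wait until the writer can accept another frame
            while !input.isReadyForMoreMediaData { usleep(10_000) }
            if let buffer = pixelBuffer(from: image, size: size) {
                let time = CMTime(value: CMTimeValue(i), timescale: fps)
                if !adaptor.append(buffer, withPresentationTime: time) {
                    print("append failed at frame \(i): \(writer.error?.localizedDescription ?? "unknown")")
                }
            }
        }
        input.markAsFinished()
        writer.finishWriting { print("Wrote \(outputURL.path)") }
    }

    // Draw a UIImage into a freshly allocated CVPixelBuffer of the target size
    func pixelBuffer(from image: UIImage, size: CGSize) -> CVPixelBuffer? {
        var pb: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                                         kCVPixelFormatType_32ARGB, nil, &pb)
        guard status == kCVReturnSuccess, let buffer = pb, let cg = image.cgImage else { return nil }
        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
        guard let ctx = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                  width: Int(size.width), height: Int(size.height),
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else { return nil }
        ctx.draw(cg, in: CGRect(origin: .zero, size: size))
        return buffer
    }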
