Advanced search

Media (0)

Word: - Tags -/flash

No media matching your criteria is available on the site.

Other articles (77)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If necessary, contact your MediaSPIP administrator to find out.

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with MediaSPIP's automated installation script:

    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

On other sites (7742)

  • AppleScript to batch convert videos with ffmpeg

    13 May 2022, by Thad_Supersperm

    I'm trying to add padding to a large number of videos in their folders. I created an app with AppleScript Editor so I can drag and drop files and have them converted automatically. I found a script on the web and edited it with the ffmpeg command I need, but it won't work because it wants to overwrite the source file.

    


    on open argv
    set paths to ""
    repeat with f in argv
        set paths to paths & quoted form of POSIX path of f & " "
    end repeat
    tell application "Terminal"
        do script "for f in " & paths & "; do ffmpeg -i \"$f\"  -vf pad=\"9/8*iw:ih:(ow-iw)/2:0:color=black\" \"$f\"; done"
        activate
    end tell
end open


    


    Note that I want to keep the filename and file type and put the new file next to the old one, just adding an underscore at the end of the new filename, before the extension; e.g. file.ext > file_.ext. A sketch of that renaming follows below.
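
    Something like this for the shell side of the loop; a minimal, untested sketch with placeholder paths standing in for whatever the droplet passes in. The only real change is the output argument: shell parameter expansion splits the path into stem and extension, so ffmpeg writes file_.ext next to file.ext instead of trying to overwrite its own input.

    # placeholder paths; the droplet would substitute its quoted POSIX paths here
    for f in /path/to/one.mp4 /path/to/two.mp4; do
        # "${f%.*}" is the path without its extension, "${f##*.}" the extension alone,
        # so /path/to/one.mp4 becomes /path/to/one_.mp4
        ffmpeg -i "$f" -vf pad="9/8*iw:ih:(ow-iw)/2:0:color=black" "${f%.*}_.${f##*.}"
    done

    In the AppleScript, that means changing only the second "$f" inside the do script string to "${f%.*}_.${f##*.}" (with the quotes escaped as before).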

    


  • Feed output of one filter to the input of one filter multiple times with ffmpeg

    27 August 2021, by kentcdodds

    I have the following ffmpeg commands to create a podcast episode:

    


    # remove all silence at start and end of the audio files
ffmpeg -i call.mp3 -af silenceremove=1:0:-50dB call1.mp3
ffmpeg -i response.mp3 -af silenceremove=1:0:-50dB response1.mp3

# remove silence longer than 1 second anywhere within the audio files
ffmpeg -i call1.mp3 -af silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB call2.mp3
ffmpeg -i response1.mp3 -af silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB response2.mp3

# normalize audio files
ffmpeg -i call2.mp3 -af loudnorm=I=-16:LRA=11:TP=0.0 call3.mp3
ffmpeg -i response2.mp3 -af loudnorm=I=-16:LRA=11:TP=0.0 response3.mp3

# cross fade audio files with intro/interstitial/outro
ffmpeg -i intro.mp3 -i call3.mp3 -i interstitial.mp3 -i response3.mp3 -i outro.mp3
  -filter_complex "[0][1]acrossfade=d=1:c2=nofade[a01];
                   [a01][2]acrossfade=d=1:c1=nofade[a02];
                   [a02][3]acrossfade=d=1:c2=nofade[a03];
                   [a03][4]acrossfade=d=1:c1=nofade"
  output.mp3


    


    While this "works" fine, I can't help but feel it would be more efficient to do all of this in one ffmpeg command. Based on what I found online this should be possible, but I don't understand the syntax well enough to make it work. Here's what I tried:

    


    ffmpeg -i intro.mp3 -i call.mp3 -i interstitial.mp3 -i response.mp3 -i outro.mp3
       -af [1]silenceremove=1:0:-50dB[trimmedCall]
       -af [3]silenceremove=1:0:-50dB[trimmedResponse]
       -af [trimmedCall]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceCall]
       -af [trimmedResponse]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceResponse]
       -af [noSilenceCall]loudnorm=I=-16:LRA=11:TP=0.0[call]
       -af [noSilenceResponse]loudnorm=I=-16:LRA=11:TP=0.0[response]
      -filter_complex "[0][call]acrossfade=d=1:c2=nofade[a01];
                       [a01][2]acrossfade=d=1:c1=nofade[a02];
                       [a02][response]acrossfade=d=1:c2=nofade[a03];
                       [a03][4]acrossfade=d=1:c1=nofade"
  output.mp3


    


    But I have a feeling I have a fundamental misunderstanding of this, because I got this error, which I don't understand:

    


    Stream specifier 'call' in filtergraph description 
[0][call]acrossfade=d=1:c2=nofade[a01];
[a01][2]acrossfade=d=1:c1=nofade[a02];
[a02][response]acrossfade=d=1:c2=nofade[a03];
[a03][4]acrossfade=d=1:c1=nofade
       matches no streams.


    


    For added context, I'm running all these commands through @ffmpeg/ffmpeg, so that last command actually looks like this (in JavaScript):

    


    await ffmpeg.run(
  '-i', 'intro.mp3',
  '-i', 'call.mp3',
  '-i', 'interstitial.mp3',
  '-i', 'response.mp3',
  '-i', 'outro.mp3',
  '-af', '[1]silenceremove=1:0:-50dB[trimmedCall]',
  '-af', '[3]silenceremove=1:0:-50dB[trimmedResponse]',
  '-af', '[trimmedCall]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceCall]',
  '-af', '[trimmedResponse]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceResponse]',
  '-af', '[noSilenceCall]loudnorm=I=-16:LRA=11:TP=0.0[call]',
  '-af', '[noSilenceResponse]loudnorm=I=-16:LRA=11:TP=0.0[response]',
  '-filter_complex', `
[0][call]acrossfade=d=1:c2=nofade[a01];
[a01][2]acrossfade=d=1:c1=nofade[a02];
[a02][response]acrossfade=d=1:c2=nofade[a03];
[a03][4]acrossfade=d=1:c1=nofade
  `,
  'output.mp3',
)
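
    The error makes sense once -af's scope is clear: -af defines a simple, per-output filtergraph, so the labels it creates ([call], [response]) never exist inside the -filter_complex graph; ffmpeg then falls back to parsing [call] as a stream specifier, which matches no stream. The whole pipeline can instead live in a single -filter_complex, chaining each input's cleanup filters with commas. A sketch with the same filter parameters as the commands above (untested):

    ffmpeg -i intro.mp3 -i call.mp3 -i interstitial.mp3 -i response.mp3 -i outro.mp3
      -filter_complex "[1]silenceremove=1:0:-50dB,silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB,loudnorm=I=-16:LRA=11:TP=0.0[call];
                       [3]silenceremove=1:0:-50dB,silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB,loudnorm=I=-16:LRA=11:TP=0.0[response];
                       [0][call]acrossfade=d=1:c2=nofade[a01];
                       [a01][2]acrossfade=d=1:c1=nofade[a02];
                       [a02][response]acrossfade=d=1:c2=nofade[a03];
                       [a03][4]acrossfade=d=1:c1=nofade"
      output.mp3

    Commas join filters into one linear chain; semicolons separate chains, so labels are only needed where one chain's output feeds another.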


    


  • How to simultaneously capture the mic, stream it to an RTSP server and play it on the iPhone's speaker?

    24 August 2021, by Norbert Towiański

    I want to capture sound from the mic, stream it to an RTSP server, and simultaneously play it on the iPhone's speaker after getting the samples back from the RTSP server; a loop of sorts. I use FFmpegKit and I want to use MobileVLCKit, but unfortunately the microphone goes silent when I start playing the stream.
    I think I've done the first step (capturing from the microphone and sending an OutputStream to the RTSP server):

    


@IBAction func transmitBtnPressed(_ sender: Any) {
    ffmpeg_transmit()
}

@IBAction func recordBtnPressed(_ sender: Any) {
    switch recordingState {
    case .idle:
        recordingState = .start
        startRecording()
        recordBtn.setTitle("Started", for: .normal)
        // open the ffmpeg pipe as an output stream for the captured PCM
        let urlToFile = URL(fileURLWithPath: outPipePath!)
        outputStream = OutputStream(url: urlToFile, append: false)
        outputStream!.open()
    case .capturing:
        recordingState = .end
        stopRecording()
        recordBtn.setTitle("End", for: .normal)
    default:
        break
    }
}

override func viewDidLoad() {
    super.viewDidLoad()
    outPipePath = FFmpegKitConfig.registerNewFFmpegPipe()
    self.setup()
}

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    setUpAuthStatus()
}

func setUpAuthStatus() {
    if AVCaptureDevice.authorizationStatus(for: AVMediaType.audio) != .authorized {
        AVCaptureDevice.requestAccess(for: AVMediaType.audio, completionHandler: { (authorized) in
            DispatchQueue.main.async {
                if authorized {
                    self.setup()
                }
            }
        })
    }
}

func setup() {
    self.session.sessionPreset = AVCaptureSession.Preset.high

    self.recordingURL = URL(fileURLWithPath: "\(NSTemporaryDirectory() as String)/file.m4a")
    if self.fileManager.isDeletableFile(atPath: self.recordingURL!.path) {
        _ = try? self.fileManager.removeItem(atPath: self.recordingURL!.path)
    }

    self.assetWriter = try? AVAssetWriter(outputURL: self.recordingURL!,
                                          fileType: AVFileType.m4a)
    self.assetWriter!.movieFragmentInterval = CMTime.invalid
    self.assetWriter!.shouldOptimizeForNetworkUse = true

    // 16-bit little-endian mono PCM at 48 kHz, matching the -f s16le input ffmpeg expects
    let audioSettings = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 48000.0,
        AVNumberOfChannelsKey: 1,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false,
    ] as [String : Any]

    self.audioInput = AVAssetWriterInput(mediaType: AVMediaType.audio,
                                         outputSettings: audioSettings)
    self.audioInput?.expectsMediaDataInRealTime = true

    if self.assetWriter!.canAdd(self.audioInput!) {
        self.assetWriter?.add(self.audioInput!)
    }

    self.session.startRunning()

    DispatchQueue.main.async {
        self.session.beginConfiguration()
        self.session.commitConfiguration()

        let audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
        let audioIn = try? AVCaptureDeviceInput(device: audioDevice!)

        if self.session.canAddInput(audioIn!) {
            self.session.addInput(audioIn!)
        }

        if self.session.canAddOutput(self.audioOutput) {
            self.session.addOutput(self.audioOutput)
        }

        self.audioConnection = self.audioOutput.connection(with: AVMediaType.audio)
    }
}

func startRecording() {
    if self.assetWriter?.startWriting() != true {
        print("error: \(self.assetWriter?.error.debugDescription ?? "")")
    }
    self.audioOutput.setSampleBufferDelegate(self, queue: self.recordingQueue)
}

func stopRecording() {
    self.audioOutput.setSampleBufferDelegate(nil, queue: nil)
    self.assetWriter?.finishWriting {
        print("Saved in folder \(self.recordingURL!)")
    }
}

func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !self.isRecordingSessionStarted {
        let presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        self.assetWriter?.startSession(atSourceTime: presentationTime)
        self.isRecordingSessionStarted = true
        recordingState = .capturing
    }

    var blockBuffer: CMBlockBuffer?
    var audioBufferList: AudioBufferList = AudioBufferList.init()

    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, bufferListSizeNeededOut: nil, bufferListOut: &audioBufferList, bufferListSize: MemoryLayout<AudioBufferList>.size, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, blockBufferOut: &blockBuffer)
    let buffers = UnsafeMutableAudioBufferListPointer(&audioBufferList)

    // write each captured buffer of raw PCM into the pipe ffmpeg reads from
    for buffer in buffers {
        let u8ptr = buffer.mData!.assumingMemoryBound(to: UInt8.self)
        let output = outputStream!.write(u8ptr, maxLength: Int(buffer.mDataByteSize))

        if (output == -1) {
            let error = outputStream?.streamError
            print("\(#file) > \(#function) > Error on outputStream: \(error!.localizedDescription)")
        }
        else {
            print("\(#file) > \(#function) > Data sent")
        }
    }
}

func ffmpeg_transmit() {
    // read raw PCM from the pipe and push it as Opus over RTSP
    let cmd1: String = "-f s16le -ar 48000 -ac 1 -i "
    let cmd2: String = " -probesize 32 -analyzeduration 0 -c:a libopus -application lowdelay -ac 1 -ar 48000 -f rtsp -rtsp_transport udp rtsp://localhost:18556/mystream"
    let cmd = cmd1 + outPipePath! + cmd2

    print(cmd)

    ffmpegSession = FFmpegKit.executeAsync(cmd, withExecuteCallback: { ffmpegSession in
        let state = ffmpegSession?.getState()
        let returnCode = ffmpegSession?.getReturnCode()
        if let returnCode = returnCode, let get = ffmpegSession?.getFailStackTrace() {
            print("FFmpeg process exited with state \(String(describing: FFmpegKitConfig.sessionState(toString: state!))) and rc \(returnCode).\(get)")
        }
    }, withLogCallback: { log in

    }, withStatisticsCallback: { statistics in

    })
}


    I want to use MobileVLCKit this way:


func startStream() {
    guard let url = URL(string: "rtsp://localhost:18556/mystream") else { return }
    audioPlayer!.media = VLCMedia(url: url)

    audioPlayer!.media.addOption("-vv")
    audioPlayer!.media.addOption("--network-caching=10000")

    audioPlayer!.delegate = self
    audioPlayer!.audio.volume = 100

    audioPlayer!.play()
}


    Could you give me some hints on how to implement this?
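
    Two hedged pointers rather than a full answer. First, the transmit leg can be verified independently of MobileVLCKit by pulling the stream on a desktop machine with ffplay (assuming the rtsp://localhost:18556/mystream server from the code above is reachable from it, and matching the udp transport it uses):

    ffplay -rtsp_transport udp rtsp://localhost:18556/mystream

    Second, the microphone going silent the moment playback starts is the classic symptom of the AVAudioSession category: capture and playback only coexist when the session is configured with the .playAndRecord category before the capture session starts.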
