
Other articles (108)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page, via a "Modifier votre profil" (edit your profile) link in the navigation, which is (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Adding user-specific information and other author-related behavior changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviors (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.

On other sites (13487)

  • Encoding video settings with Transloadit and FFMPEG

    2 October 2015, by David Soler

    I'm using Transloadit to convert and compress videos from .mov to .ts format. I'm using the JSON templates, but unfortunately the docs are not too extensive. The thing is, the quality I'm getting right now is very poor and pixelated. If I do it from the console with the ffmpeg command, including some parameters such as crf (Constant Rate Factor), the quality gets a lot better, but I don't know how to edit the Transloadit template to get the same result.

    This is the ffmpeg command I'm using to convert the video in the console:

    ./ffmpeg -i ../canales.mov -c:v libx264 -crf 23 -bsf:a aac_adtstoasc output.ts

    And this is the JSON template I'm using right now. I guess I should add parameters to the ffmpeg hash, but I don't know which settings are allowed:

    {
     "steps": {
       "file": {
         "robot": "/file/filter",
         "accepts": [
           [
             "${file.mime}",
             "regex",
             "video"
           ]
         ],
         "declines": [
           [
             "${file.size}",
             ">",
             "10485760"
           ],
           [
             "${file.meta.duration}",
             ">",
             "16"
           ]
         ],
         "error_on_decline": true
       },
       "segments": {
         "robot": "/video/encode",
         "preset": "iphone-high",
         "width": 1242,
         "height": 2208,
         "use": "file",
         "segment": true,
         "segment_duration": 10,
         "ffmpeg_stack": "v2.2.3",
         "ffmpeg": {
           "b": "1200K",
           "crf": 23
         }
       },
       "thumb": {
         "robot": "/video/thumbs",
         "use": "file",
         "count": 1
       },
       "store": {
         "robot": "/s3/store",
         "use": [
           "segments",
           "thumb"
         ],
         "key": "key",
         "secret": "Secret",
         "bucket": "bucket"
       }
     }
    }
  • AVAssetWriter creating mp4 with no sound in last 50msec

    12 August 2015, by Joseph K

    I’m working on a project involving live streaming from the iPhone’s camera.

    To minimize loss during AVAssetWriter finishWriting, I use an array of 2 asset writers and swap them whenever I need to create an mp4 fragment out of the recorded buffers.

    Code responsible for capturing Audio & Video sample buffers

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

        //Skip samples whose data is not yet complete
        if !CMSampleBufferDataIsReady(sampleBuffer) {
            println("Skipped sample because it was not ready")
            return
        }

        if captureOutput == audioOutput {

            //readyForMoreMediaData is a property on AVAssetWriterInput
            if audioWriterBuffers[0].readyForMoreMediaData {
                if !writers[0].appendAudio(sampleBuffer) { println("Failed to append: \(recordingPackages[0].name). Error: \(recordingPackages[0].outputWriter.error.localizedDescription)") }
                else {
                    writtenAudioFrames++

                    if writtenAudioFrames == framesPerFragment {
                        writeFragment()
                    }
                }
            }
            else {
                println("Skipped audio sample; it is not ready.")
            }
        }

        else if captureOutput == videoOutput {
            //Video sample buffer
            if videoWriterBuffers[0].readyForMoreMediaData {
                //Call startSessionAtSourceTime if needed
                //Append sample buffer with a source time
            }
        }
    }
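
    For reference, a minimal sketch of the elided video branch (hypothetical: the question only describes this part in comments, and the sessionStarted flag is an assumption, not taken from the original code):

    //Hypothetical sketch of the video branch described in the comments above.
    //`sessionStarted` is an assumed flag tracking whether startSessionAtSourceTime
    //has already been called on the active writer.
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    if !sessionStarted {
        writers[0].outputWriter.startSessionAtSourceTime(pts)
        sessionStarted = true
    }
    if !videoWriterBuffers[0].appendSampleBuffer(sampleBuffer) {
        println("Failed to append video sample")
    }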

    Code responsible for the writing and swapping

    func writeFragment() {
        writtenAudioFrames = 0

        swap(&writers[0], &writers[1])
        if !writers[0].startWriting() { println("Failed to start OTHER writer writing") }
        else { startTime = CFAbsoluteTimeGetCurrent() }

        audioWriterBuffers[0].markAsFinished()
        videoWriterBuffers[0].markAsFinished()

        writers[1].outputWriter.finishWritingWithCompletionHandler { () -> Void in
            println("Finish Package record Writing, now Resetting")
            //
            // Handle written MP4 fragment code
            //

            //Reset Writer
            //Basically reallocate it as a new AVAssetWriter with a given URL and MPEG4 file Type and add inputs to it
            self.resetWriter()
        }
    }
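
    The resetWriter() step is only described in the comments above. A minimal sketch of what it might look like (hypothetical; the names, output path, and writer container are assumptions, written in the same Swift 1.x style as the question's code):

    func resetWriter() {
        //Hypothetical sketch: reallocate the finished writer as a fresh
        //AVAssetWriter pointed at a new fragment URL with an MPEG4 file type.
        var error: NSError?
        let path = NSTemporaryDirectory() + "fragment.mp4" //assumed output location
        let url: NSURL! = NSURL(fileURLWithPath: path)
        let writer: AVAssetWriter! = AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4, error: &error)

        let audioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
        audioInput.expectsMediaDataInRealTime = true
        writer.addInput(audioInput)
        //...create and add the video input the same way, then store the new
        //writer (wrapped as needed) back into writers[1]...
    }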

    The issue at hand

    The written MP4 fragments are being sent over to a local sandbox server to be analyzed.

    When the MP4 fragments are stitched together using FFmpeg, there is a noticeable glitch in the sound, because there is no audio in the last 50 msec of every fragment.

    My audio AVAssetWriterInput's settings are the following:

    static let audioSettings: [NSObject : AnyObject]! =
    [
       AVFormatIDKey : NSNumber(integer: kAudioFormatMPEG4AAC),
       AVNumberOfChannelsKey : NSNumber(integer: 1),
       AVSampleRateKey : NSNumber(int: 44100),
       AVEncoderBitRateKey : NSNumber(int: 64000),
    ]

    As such, I encode 44 audio sample buffers every second. They are all being successfully appended.
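
    For scale, a quick back-of-the-envelope check (this assumes the standard 1024-samples-per-packet AAC framing, which the question does not state explicitly):

    //How much audio does one AAC buffer cover, and how many buffers
    //fit into the missing 50 msec? (1024 samples per packet is an
    //assumption based on standard AAC framing.)
    let sampleRate = 44100.0
    let samplesPerPacket = 1024.0
    let bufferDuration = samplesPerPacket / sampleRate    //~0.0232 s per buffer
    let buffersPerSecond = sampleRate / samplesPerPacket  //~43, matching the ~44 above
    let buffersInGap = 0.050 / bufferDuration             //~2.2 buffers in 50 msec
    println("One buffer covers \(bufferDuration * 1000) msec; the gap is ~\(buffersInGap) buffers")

    So the 50 msec of silence corresponds to roughly the last two audio buffers of each fragment never making it into the written file.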

    Further resources

    Here's a waveform display of the audio stream after concatenating the mp4 fragments:

    [Image: waveform of the audio stream]

    Note that my fragments are about 2 seconds in length.
    Note that I'm focusing on audio, since video frames are extremely smooth when jumping from one fragment to another.

    Any idea as to what is causing this? I can provide further code or info if needed.

  • How to estimate bandwidth/speed requirements for real-time streaming video?

    19 June 2016, by Vivek Seth

    For a project I'm working on, I'm trying to stream video to an iPhone through its headphone jack. My estimated bitrate is about 200 kbps (if I'm wrong about this, please ignore it).

    I'd like to squeeze as much performance out of this bitrate as possible, and sound is not important to me, only video. My understanding is that to stream real-time video I will need to encode it with some codec on the fly and send compressed frames to the iPhone for it to decode and render. Based on my research, it seems that H.265 is one of the most space-efficient codecs available, so I'm considering using that.

    Assuming my basic understanding of live streaming is correct, how would I estimate the FPS I could achieve for a given resolution using the H.265 codec?

    The best solution I can think of is to take a video file, encode it with H.265, trim it to 1 minute in length, and see how large the file is. The issue I see with this approach is that my calculations would include some overhead from the video container format (AVI, MKV, etc.) and from the audio channels that I don't care about.
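
    A rough sketch of that estimate in code (the resolution and the bits-per-pixel figure are assumptions for illustration, not measured H.265 values):

    //Estimate achievable FPS from available bandwidth, a target resolution,
    //and an assumed compression efficiency in bits per pixel.
    let availableBitrate = 200_000.0  //bits per second, from the estimate above
    let width = 320.0                 //hypothetical target resolution
    let height = 240.0
    let bitsPerPixel = 0.05           //assumed H.265 efficiency; measure to refine
    let bitsPerFrame = width * height * bitsPerPixel
    let estimatedFPS = availableBitrate / bitsPerFrame
    println("~\(Int(estimatedFPS)) fps at \(Int(width))x\(Int(height))")  //~52 fps here

    As for the overhead concern: encoding the test clip with audio disabled (ffmpeg's -an flag) removes the audio channels from the measurement, and writing a raw H.265 elementary stream instead of an AVI/MKV file avoids the container overhead as well.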