Advanced search

Media (91)

Other articles (59)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)
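    As a rough illustration of that pattern (a sketch only, not MediaSPIP's actual markup; file names and player paths are hypothetical), an HTML5 video tag with a Flash fallback for older browsers typically looks like this:

    <!-- Sketch: HTML5 video with a Flash fallback; all paths here are hypothetical -->
    <video controls width="640" height="360" poster="poster.jpg">
      <source src="movie.mp4" type="video/mp4">
      <source src="movie.ogv" type="video/ogg">
      <!-- Browsers without HTML5 video render the fallback content, i.e. the Flash object -->
      <object type="application/x-shockwave-flash" data="flowplayer.swf" width="640" height="360">
        <param name="movie" value="flowplayer.swf">
        <param name="flashvars" value='config={"clip":"movie.mp4"}'>
      </object>
    </video>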

  • Customizing categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as equivalent to a "rubrique" (section).
    For a document of the category type, the fields offered by default are: Text
    This form can be modified under:
    Administration > Form mask configuration.
    For a document of the media type, the fields not displayed by default are: Short description
    It is also in this configuration area that you can specify the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or later. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (8716)

  • AVAssetWriter creating mp4 with no sound in last 50msec

    12 August 2015, by Joseph K

    I’m working on a project involving live streaming from the iPhone’s camera.

    To minimize loss during AVAssetWriter finishWriting, I use an array of 2 asset writers and swap them whenever I need to create an mp4 fragment out of the recorded buffers.

    Code responsible for capturing Audio & Video sample buffers

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

        if CMSampleBufferDataIsReady(sampleBuffer) <= 0 {
            println("Skipped sample because it was not ready")
            return
        }

        if captureOutput == audioOutput {

            if audioWriterBuffers[0].readyForMoreMediaData() {
                if !writers[0].appendAudio(sampleBuffer) {
                    println("Failed to append: \(recordingPackages[0].name). Error: \(recordingPackages[0].outputWriter.error.localizedDescription)")
                }
                else {
                    writtenAudioFrames++

                    if writtenAudioFrames == framesPerFragment {
                        writeFragment()
                    }
                }
            }
            else {
                println("Skipped audio sample; it is not ready.")
            }
        }

        else if captureOutput == videoOutput {
            //Video sample buffer
            if videoWriterBuffers[0].readyForMoreMediaData() {
                //Call startSessionAtSourceTime if needed
                //Append sample buffer with a source time
            }
        }
    }

    Code responsible for the writing and swapping

    func writeFragment() {
        writtenAudioFrames = 0

        swap(&writers[0], &writers[1])
        if !writers[0].startWriting() {
            println("Failed to start OTHER writer writing")
        }
        else {
            startTime = CFAbsoluteTimeGetCurrent()
        }

        audioWriterBuffers[0].markAsFinished()
        videoWriterBuffers[0].markAsFinished()

        writers[1].outputWriter.finishWritingWithCompletionHandler { () -> Void in
            println("Finish Package record Writing, now Resetting")
            //
            // Handle written MP4 fragment code
            //

            //Reset Writer
            //Basically reallocate it as a new AVAssetWriter with a given URL and MPEG4 file type and add inputs to it
            self.resetWriter()
        }
    }

    The issue at hand

    The written MP4 fragments are being sent over to a local sandbox server to be analyzed.

    When the MP4 fragments are stitched together using FFmpeg, there is a noticeable glitch in the sound because there is no audio in the last 50 msec of every fragment.
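    The question does not show exactly how the fragments are stitched; one common FFmpeg approach (shown here only as a sketch, with hypothetical file names) is the concat demuxer with stream copy:

    # concat.txt lists the fragments in playback order, e.g.:
    #   file 'fragment0.mp4'
    #   file 'fragment1.mp4'
    ffmpeg -f concat -safe 0 -i concat.txt -c copy stitched.mp4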

    My audio AVAssetWriterInput’s settings are the following:

    static let audioSettings: [NSObject : AnyObject]! =
    [
       AVFormatIDKey : NSNumber(integer: kAudioFormatMPEG4AAC),
       AVNumberOfChannelsKey : NSNumber(integer: 1),
       AVSampleRateKey : NSNumber(int: 44100),
       AVEncoderBitRateKey : NSNumber(int: 64000),
    ]

    As such, I encode 44 audio sample buffers every second. They are all being successfully appended.
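    For a sense of scale (my own back-of-the-envelope numbers, assuming the usual 1024-sample AAC buffers, which the question does not confirm), the 50 msec gap corresponds to roughly the last two audio buffers of each fragment:

    // Rough arithmetic only; 1024 samples per AAC buffer is an assumption
    let sampleRate = 44_100.0
    let samplesPerBuffer = 1_024.0
    let bufferDuration = samplesPerBuffer / sampleRate      // ≈ 0.023 s per buffer
    let buffersPerSecond = sampleRate / samplesPerBuffer    // ≈ 43 buffers per second
    let buffersInGap = 0.050 / bufferDuration               // ≈ 2.2 buffers per 50 msec gap
    println("≈ \(buffersPerSecond) buffers/s, gap ≈ \(buffersInGap) buffers")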

    Further resources

    Here’s a waveform display of the audio stream after concatenating the mp4 fragments

    [Image: waveform of the audio stream]

     Note that my fragments are about 2 seconds in length.
     Note that I’m focusing on audio, since video frames are extremely smooth when jumping from one fragment to another.

    Any idea as to what is causing this? I can provide further code or info if needed.

  • FFMPEG, Blur an area of a video using Image Select Areas Plugin

    12 July 2016, by Drupalist

    I am building an online video editor. I need to allow users to blur an area of a video. This must be done graphically: the user should be able to select an area on a screenshot of the video, and the selected area must then be blurred, something like this:

    [Image: example of a selected area on a video screenshot]

    Is there any way to map the selected area’s dimensions and its distance from the borders to the real values that must be applied to the video?

    I mean that four numbers (width, height, top, left) will be provided by this plugin, and I need to use these numbers to blur an area of the video.

    I take a screenshot of the video. To keep the aspect ratio, I fix the width at 800px and let the height scale up or down. Clearly, the width, height, top, and left of the selected area on the screenshot differ from those of the area to blur in the video only by a scale factor, but I don’t know how to get this factor. Besides that, I don’t know how to get the video resolution.

    Thanks in advance.


    Update

    This is my PHP code that gets the selected area dimensions and offsets

    $iLeft = $_POST['left'];
    $iTop = $_POST['top'];
    $iWidth = $_POST['width'];
    $iHeight = $_POST['height'];
    exec('ffmpeg -i '.$url.' -filter_complex "'
        .'[0:v]scale=iw*sar:ih,setsar=1,split[bg][bb];'
        .'[bb]crop='.$iWidth.'*iw/800:iw*'.$iHeight.'/800:'.$iWidth.'*iw/800:'.$iHeight.'*iw/800,boxblur=10[b0];'
        .'[bg][b0]overlay='.$iLeft.'*W/800:'.$iTop.'*W/800'
        .'" '.$name.' > block.txt 2>&1');

    $iLeft, $iTop, $iWidth, $iHeight are the left, top, width and height of the selected area via the image plugin.

    It blurs many selected areas very well, but areas like the following do not get blurred:

    Left, Top, Width, Height = 257, 39.26666259765625, 10, 391

    Also, a video with dimensions 207x207 did not get blurred either.
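    On the point above about not knowing how to get the video resolution: one common way (a sketch only, with hypothetical variable names) is to query it with ffprobe and derive the 800 px mapping factor from the real width:

    // Sketch: read the video resolution with ffprobe; the mapping factor is then $videoWidth / 800
    $json = shell_exec('ffprobe -v error -select_streams v:0 '
        .'-show_entries stream=width,height -of json '.escapeshellarg($url));
    $info = json_decode($json, true);
    $videoWidth  = $info['streams'][0]['width'];
    $videoHeight = $info['streams'][0]['height'];
    $factor = $videoWidth / 800;   // e.g. blur x in the video = $iLeft * $factor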

  • How to create an ffmpeg mosaic with empty cells?

    7 September 2020, by DenVebber

    I am trying to create a video mosaic with ffmpeg.

    Two videos with a horizontal stack:

    ffmpeg -i vid1.mp4 -i vid2.mp4 -filter_complex "[0]scale=-1:1080[v0];[1]scale=-1:1080[v1];[v0][v1]hstack=inputs=2[vmap]" -map "[vmap]" output.mp4

    


    How can I replace vid1.mp4 with a black background and keep the stack with 2 elements?
    I could add a blackvideo.mp4 input, but there should be an easier way, right?
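    One possible direction (a sketch only, not necessarily what the asker ended up with): ffmpeg can generate the black panel itself with the color lavfi source, so no blackvideo.mp4 file is needed. The panel size below (1920x1080) is an assumption and should match the slot the missing video would occupy; -shortest stops the output when vid2.mp4 ends, since the color source is otherwise endless:

    ffmpeg -f lavfi -i "color=c=black:s=1920x1080:r=25" -i vid2.mp4 \
      -filter_complex "[1]scale=-1:1080[v1];[0][v1]hstack=inputs=2[vmap]" \
      -map "[vmap]" -shortest output.mp4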