
Other articles (96)

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, opening it up to new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. Simply sign up to the translators' mailing list to ask for more information.
    At present MediaSPIP is only available in French and (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Diogene: creating custom templates for content editing forms

    26 October 2010, by

    Diogene is one of the SPIP plugins enabled by default (as an extension) when MediaSPIP is initialised.
    What this plugin is for
    Creating form templates
    The Diogène plugin lets you create sector-specific form templates for the three SPIP object types: articles; sections (rubriques); sites.
    For a given sector it can thus define one form template per object, adding or removing fields to make the form (...)

On other sites (11727)

  • How to create a timelapse directly to MP4 with FFMPEG by adding JPEG image data every N seconds?

    28 August 2019, by suromark

    I am trying to create a bash script with FFMPEG on an OctoPi 3d printer controller that periodically appends new frames to an MP4 timelapse on the fly (i.e. compress and add one frame every N seconds, rather than processing a folder of hundreds or thousands of images once the print is finished), to spread the video-encoding work across the whole print time, when the Raspberry Pi is mostly idle anyway.

    I’d like to reduce the storage overhead of the JPEG files and the processing delay once the print finishes.

    So far I’ve set up a bash script that starts on boot (via rc.local) and, together with a second watchdog script, polls the OctoPrint API for signs of activity every 5 seconds (while true ... sleep 5). Once the progress value is non-false and the print head temperature is above 120 °C, the script polls the localhost URL of the webcam for a JPEG file, which is timestamped and stored in a folder named after the current print job (taken from the API). Once the print is completed, I scp the files to my laptop, where I use FFMPEG to convert them to MP4.
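
    Roughly, the polling check looks like this (a trimmed sketch rather than the full watchdog; OCTOPRINT_KEY stands in for the real API key, jq handles the JSON, and the snapshot URL is OctoPi's stock mjpg-streamer endpoint):

    while true; do
        progress=$(curl -s -H "X-Api-Key: $OCTOPRINT_KEY" \
            http://localhost/api/job | jq -r '.progress.completion')
        tool_temp=$(curl -s -H "X-Api-Key: $OCTOPRINT_KEY" \
            http://localhost/api/printer | jq -r '.temperature.tool0.actual')

        # progress.completion is null while the printer is idle
        if [ "$progress" != "null" ] && awk "BEGIN { exit !($tool_temp > 120) }"; then
            curl -s -o "snapshots/$(date +%s).jpg" "http://localhost:8080/?action=snapshot"
        fi
        sleep 5
    done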

    Ideally I’d like to keep the monitoring script behaving as it does, but instead of writing out JPEGs I’d like to pipe the data into FFMPEG (once every N seconds), upon which FFMPEG should process it as a new single frame of its stream-in-progress, writing out the data once a new GOP is complete (or finishing the last GOP in the buffer once a kill signal arrives).

    For this I most likely need to start FFMPEG from the main loop once a new job starts (to set the output file), then establish some pipe structure to send the JPEG frames, I guess?

    So far I’ve not managed to find/google any example of this (or a similar workflow), though I think it’s not that exotic a use case...?

    Edit to add:
    This is my current FFMPEG bash command:

    ffmpeg \
    -framerate 30 \
    -pattern_type glob \
    -i '*.jpg' \
    -c:v libx264 \
    -vf "normalize=blackpt=black:whitept=white:smoothing=50" \
    -pix_fmt yuv420p \
    ../"$outname""$outdate"_lapse.mp4

    So far, I’ve managed to get FFMPEG to read from a named pipe, but it saves the output and stops once the first frame has come through the pipe. I’d like to tell it to keep reading/waiting on the pipe until it receives a kill (or other) signal:

    mkfifo /tmp/testpipe

    ffmpeg -y -framerate 30 -pattern_type glob -f image2pipe -i /tmp/testpipe -c:v libx264 -pix_fmt yuv420p "pipetestout.mp4"

    and from another terminal:

    cat some-image.jpg > /tmp/testpipe

    I think I’m close (but no cigar yet) ...
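
    Update: the likely culprit is that ffmpeg sees EOF on the FIFO the moment cat exits, because no writer is left holding the pipe open, so it finalizes the file after the first frame. One workaround is to keep a write descriptor open for the whole print and only close it at the end. A minimal sketch, assuming the same pipe and output as above (print_is_running is a hypothetical stand-in for the OctoPrint progress check):

    mkfifo /tmp/testpipe

    # start the encoder first; it blocks until a writer opens the pipe
    ffmpeg -y -f image2pipe -framerate 30 -c:v mjpeg -i /tmp/testpipe \
        -c:v libx264 -pix_fmt yuv420p pipetestout.mp4 &

    # hold a write end open on fd 3 so ffmpeg never sees EOF between frames
    exec 3> /tmp/testpipe

    while print_is_running; do    # hypothetical OctoPrint progress check
        cat latest-snapshot.jpg >&3
        sleep "$N"
    done

    # closing fd 3 delivers EOF; ffmpeg flushes the remaining frames and finalizes the MP4
    exec 3>&-
    wait

    One caveat: with the default MP4 muxer the moov index is only written when ffmpeg exits cleanly, so a mid-print crash leaves an unplayable file; -movflags frag_keyframe+empty_moov (fragmented MP4) is one hedge against that.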

  • Capturing audio data (using javascript) and uploading it to a server as MP3

    4 September 2018, by Michel

    Following a number of resources on the internet, I am trying to build a simple web page where I can record something (my voice), then make an mp3 file out of the recording, and finally upload that file to a server.

    At this point I can record and also play back, but I haven’t got as far as uploading; it seems I cannot even create an mp3 file locally.
    Can someone tell me what I am doing wrong, or doing in the wrong order?

    Below is all the code I have at this point.

       
       
       
       


    <div>
       <h2>Audio record and playback</h2>
       <p>
           <button id="startRecord"><h3>Start</h3></button>
           <button id="stopRecord" disabled="disabled"><h3>Stop</h3></button>
           <audio id="player" controls="controls"></audio>
           <a id="audioDownload"></a>
       </p>
    </div>

    <code class="echappe-js">&lt;script&gt;<br />
     var player = document.getElementById('player');<br />
    <br />
     var handleSuccess = function(stream) {<br />
       rec = new MediaRecorder(stream);<br />
    <br />
       rec.ondataavailable = e =&gt; {<br />
           audioChunks.push(e.data);<br />
           if (rec.state == &quot;inactive&quot;) {<br />
               let blob = new Blob(audioChunks,{type:'audio/x-mpeg-3'});<br />
               player.src = URL.createObjectURL(blob);<br />
               player.controls=true;<br />
               player.autoplay=true;<br />
               // audioDownload.href = player.src;<br />
               // audioDownload.download = 'sound.data';<br />
               // audioDownload.innerHTML = 'Download';<br />
               mp3Build();<br />
           }<br />
       }<br />
    <br />
       player.src = stream;<br />
     };<br />
    <br />
     navigator.mediaDevices.getUserMedia({audio:true/*, video: false */})<br />
         .then(handleSuccess);<br />
    <br />
    startRecord.onclick = e =&gt; {<br />
     startRecord.disabled = true;<br />
     stopRecord.disabled=false;<br />
     audioChunks = [];<br />
     rec.start();<br />
    }<br />
    <br />
    stopRecord.onclick = e =&gt; {<br />
     startRecord.disabled = false;<br />
     stopRecord.disabled=true;<br />
     rec.stop();<br />
    }<br />
    <br />
    <br />
    var ffmpeg = require('ffmpeg');<br />
    <br />
    function mp3Build() {<br />
    try {<br />
       var process = new ffmpeg('sound.data');<br />
       process.then(function (audio) {<br />
           // Callback mode.<br />
           audio.fnExtractSoundToMP3('sound.mp3', function (error, file) {<br />
               if (!error) {<br />
                   console.log('Audio file: ' + file);<br />
           audioDownload.href = player.src;<br />
           audioDownload.download = 'sound.mp3';<br />
           audioDownload.innerHTML = 'Download';<br />
         } else {<br />
           console.log('Error-fnExtractSoundToMP3: ' + error);<br />
         }<br />
           });<br />
       }, function (err) {<br />
           console.log('Error: ' + err);<br />
       });<br />
    } catch (e) {<br />
       console.log(e.code);<br />
       console.log(e.msg);<br />
    }<br />
    }<br />
    <br />
    &lt;/script&gt;

    When I try to investigate what is happening using the debugger in the web console, on the line:

    var process = new ffmpeg('sound.data');

    I get this message:

    Paused on exception
    TypeError: ffmpeg is not a constructor.

    And on the line:

    var ffmpeg = require('ffmpeg');

    I get this message:

    Paused on exception
    ReferenceError: require is not defined.

    Besides, when I watch the expression ffmpeg, I can see:

    ffmpeg: undefined

    After some further investigation, and using browserify, I used the following code:

       
       
       
       


    <div>
       <h2>Audio record and playback</h2>
       <p>
           <button id="startRecord"><h3>Start</h3></button>
           <button id="stopRecord" disabled="disabled"><h3>Stop</h3></button>
           <audio id="player" controls="controls"></audio>
           <a id="audioDownload"></a>
       </p>
    </div>

    <code class="echappe-js">&lt;script src='http://stackoverflow.com/feeds/tag/bundle.js'&gt;&lt;/script&gt;
    &lt;script&gt;<br />
     var player = document.getElementById('player');<br />
    <br />
     var handleSuccess = function(stream) {<br />
       rec = new MediaRecorder(stream);<br />
    <br />
       rec.ondataavailable = e =&gt; {<br />
           if (rec.state == &quot;inactive&quot;) {<br />
               let blob = new Blob(audioChunks,{type:'audio/x-mpeg-3'});<br />
               //player.src = URL.createObjectURL(blob);<br />
               //player.srcObject = URL.createObjectURL(blob);<br />
               //player.srcObject = blob;<br />
               player.srcObject = stream;<br />
               player.controls=true;<br />
               player.autoplay=true;<br />
               // audioDownload.href = player.src;<br />
               // audioDownload.download = 'sound.data';<br />
               // audioDownload.innerHTML = 'Download';<br />
               mp3Build();<br />
           }<br />
       }<br />
    <br />
       //player.src = stream;<br />
       player.srcObject = stream;<br />
     };<br />
    <br />
     navigator.mediaDevices.getUserMedia({audio:true/*, video: false */})<br />
         .then(handleSuccess);<br />
    <br />
    startRecord.onclick = e =&gt; {<br />
     startRecord.disabled = true;<br />
     stopRecord.disabled=false;<br />
     audioChunks = [];<br />
     rec.start();<br />
    }<br />
    <br />
    stopRecord.onclick = e =&gt; {<br />
     startRecord.disabled = false;<br />
     stopRecord.disabled=true;<br />
     rec.stop();<br />
    }<br />
    <br />
    <br />
    var ffmpeg = require('ffmpeg');<br />
    <br />
    function mp3Build() {<br />
    try {<br />
       var process = new ffmpeg('sound.data');<br />
       process.then(function (audio) {<br />
           // Callback mode.<br />
           audio.fnExtractSoundToMP3('sound.mp3', function (error, file) {<br />
               if (!error) {<br />
                   console.log('Audio file: ' + file);<br />
           //audioDownload.href = player.src;<br />
           audioDownload.href = player.srcObject;<br />
           audioDownload.download = 'sound.mp3';<br />
           audioDownload.innerHTML = 'Download';<br />
         } else {<br />
           console.log('Error-fnExtractSoundToMP3: ' + error);<br />
         }<br />
           });<br />
       }, function (err) {<br />
           console.log('Error: ' + err);<br />
       });<br />
    } catch (e) {<br />
       console.log(e.code);<br />
       console.log(e.msg);<br />
    }<br />
    }<br />
    <br />
    &lt;/script&gt;

    That solved the problem of the expression ffmpeg being undefined.

    But playback no longer works. I may not be doing the right thing with player.srcObject, and maybe some other things too.

    When I use this line:

    player.srcObject = URL.createObjectURL(blob);

    I get this message:

    Paused on exception
    TypeError: Value being assigned to HTMLMediaElement.srcObject is not an object.

    And when I use this line:

    player.srcObject = blob;

    I get this message:

    Paused on exception
    TypeError: Value being assigned to HTMLMediaElement.srcObject does not implement interface MediaStream.

    Finally, if I use this:

    player.srcObject = stream;

    I do not get any error message, but the voice recording still does not play back.
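
    Update: two things seem to be at play here. require('ffmpeg') cannot work in a page even after browserify, because the ffmpeg npm package drives a native ffmpeg binary on the machine it runs on; it is server-side code, not something the browser can execute. And srcObject only accepts MediaStream-like objects, so a recorded Blob has to go back through a blob: URL assigned to src. A minimal sketch of the playback part under those assumptions (element ids as in the markup above; MediaRecorder typically emits webm or ogg, not mp3):

    // live monitoring uses srcObject; finished recordings go through .src
    rec.onstop = () => {
        const blob = new Blob(audioChunks, { type: rec.mimeType || 'audio/webm' });
        player.srcObject = null;                 // detach the live microphone stream
        player.src = URL.createObjectURL(blob);  // blob: URLs belong on .src
        player.controls = true;
        player.play();
    };

    Producing an actual mp3 would then mean either uploading the webm/ogg blob and converting it with ffmpeg on the server, or encoding in the browser with a JavaScript encoder such as lamejs.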

  • How to write an image with yuv420 format data with PIL or something like that

    16 April 2021, by nathan wu

    I have a video in the yuv420p pixel format. At first I tried to read each frame’s bytes through a pipe with the pixel format set to rgb24, and used PIL to build an image from each frame. However, the frames read as rgb24 seem to lose a little quality.

    Here is the command for reading a frame with the rgb24 pixel format:


        ffmpeg -y -i input.mp4 -vcodec rawvideo -pix_fmt rgb24 -an -r 25 -f rawvideo pipe:1
        frame_data = self.process.stdout.read(1920*1080*3)


    Then I tried to read it with the yuv420p pixel format:


        ffmpeg -y -i input.mp4 -vcodec rawvideo -pix_fmt yuv420p -an -r 25 -f rawvideo pipe:1
        frame_data = self.process.stdout.read(1920*1080*3//2)


    A single frame contains half the bytes of an rgb24 frame: 3110400 bytes for a 1920*1080 yuv420p frame. I passed this data to PIL:


        Image.frombytes('YCbCr', (1920, 1080), frame_data)


    but PIL raised a "not enough image data" error. I looked at the modes PIL supports for building images from bytes, and none of them uses 12 bits per pixel. I also tried converting the YUV data to RGB, but that takes far longer, which matters when there is a long video to process.


    Am I doing something wrong? Is there any way to write an image from raw YUV data without any conversion?

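    One way that avoids a full colour-space conversion in Python is to expand the planar buffer into the packed layout PIL expects: the 'YCbCr' mode wants 3 bytes per pixel, which is exactly why a 12-bit-per-pixel planar frame is "not enough image data". A minimal numpy sketch, assuming frame_data holds exactly one 1920*1080 yuv420p frame as read above (the nearest-neighbour chroma upsampling is an assumption, not something PIL mandates):

    import numpy as np
    from PIL import Image

    w, h = 1920, 1080
    buf = np.frombuffer(frame_data, dtype=np.uint8)  # 3110400 bytes, planar YUV420

    y = buf[: w * h].reshape(h, w)
    u = buf[w * h : w * h * 5 // 4].reshape(h // 2, w // 2)
    v = buf[w * h * 5 // 4 :].reshape(h // 2, w // 2)

    # upsample the half-resolution chroma planes to full resolution
    u = u.repeat(2, axis=0).repeat(2, axis=1)
    v = v.repeat(2, axis=0).repeat(2, axis=1)

    # packed (h, w, 3) layout is what PIL's 'YCbCr' mode expects
    img = Image.fromarray(np.dstack((y, u, v)), mode='YCbCr')
    img.convert('RGB').save('frame.png')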