Advanced search

Media (0)

Word: - Tags - / single page

No media matching your criteria is available on the site.

Other articles (60)

  • Videos

    21 April 2011, by

    As with "audio" documents, MediaSPIP displays videos, whenever possible, using the HTML5 video tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one), and that each browser natively handles only certain video formats.
    Its main advantage, on the other hand, is that it benefits from native video support in browsers and therefore avoids the use of Flash and (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • Making files available

    14 April 2011, by

    By default, when it is initialized, MediaSPIP does not allow visitors to download the files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
    However, it is possible, and easy, to give visitors access to these documents in various forms.
    All of this happens in the skeleton's configuration page. You need to go to the channel's administration area and choose, in the navigation, (...)

On other sites (7215)

  • gstreamer receive and reassemble h.264 video over udp

    8 May 2019, by Ragnar Harðarson

    I'm trying to capture a video stream from a Tello drone with gstreamer.

    I've tried the following gstreamer pipeline:

    gst-launch-1.0 -v udpsrc buffer-size=622080 skip-first-bytes=2 port=6038 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW,width=(string)960, height=(string)720, payload=(int)96, a-framerate=25" \
       ! queue \
       ! rtpvrawdepay \
       ! autovideosink

    But this outputs (with export GST_DEBUG=*:3):

    WARNING: from element /GstPipeline:pipeline0/GstRtpVRawDepay:rtpvrawdepay0: Could not decode stream.
    Additional debug info:
    gstrtpbasedepayload.c(506): GstFlowReturn gst_rtp_base_depayload_handle_buffer(GstRTPBaseDepayload *, GstRTPBaseDepayloadClass *, GstBuffer *) (): /GstPipeline:pipeline0/GstRtpVRawDepay:rtpvrawdepay0:

    Information about the stream can be seen here: https://github.com/m6c7l/dji-ryze-tello/tree/master/example/video

    The video can be piped into ffmpeg with the following command:

    ffmpeg -i - -f image2pipe -pix_fmt rgb24 -vcodec rawvideo -

    I'm missing a gstreamer pipeline that can piece these NALs together into an h264 frame.

    Update: output with playbin

    I tried running the command with export GST_DEBUG=*:3 and I'm getting the following output repeatedly:

    gst-launch-1.0 -v playbin uri=udp://0.0.0.0:6038
    Setting pipeline to PAUSED ...
    Pipeline is live and does not need PREROLL ...
    /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: ring-buffer-max-size = 0
    /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: buffer-size = -1
    /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: buffer-duration = -1
    /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: use-buffering = false
    /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: download = false
    /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: uri = udp://0.0.0.0:6038
    /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: connection-speed = 0
    /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: source = "\(GstUDPSrc\)\ source"
    Setting pipeline to PLAYING ...
    New clock: GstSystemClock
    /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = video/x-h264, stream-format=(string)byte-stream
    /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstH264Parse:h264parse0.GstPad:sink: caps = video/x-h264, stream-format=(string)byte-stream
    0:01:19.923679000  3202 0x7f9639857320 WARN               h264parse gsth264parse.c:1351:gst_h264_parse_handle_frame:<h264parse0> broken/invalid nal Type: 1 Slice, Size: 16113 will be dropped
    0:01:19.924320000  3202 0x7f9639857320 WARN               h264parse gsth264parse.c:1351:gst_h264_parse_handle_frame:<h264parse0> broken/invalid nal Type: 1 Slice, Size: 17237 will be dropped
    0:01:19.925063000  3202 0x7f9639857320 WARN       codecparsers_h264 gsth264parser.c:2039:gst_h264_parse_pps: failed to read SE
    0:01:19.925075000  3202 0x7f9639857320 WARN       codecparsers_h264 gsth264parser.c:2046:gst_h264_parse_pps: error parsing "Picture parameter set"
    0:01:19.925081000  3202 0x7f9639857320 WARN               h264parse gsth264parse.c:882:gst_h264_parse_process_nal:<h264parse0> failed to parse PPS:
    0:01:19.925089000  3202 0x7f9639857320 WARN               h264parse gsth264parse.c:1351:gst_h264_parse_handle_frame:<h264parse0> broken/invalid nal Type: 8 PPS, Size: 5 will be dropped
    0:01:19.925106000  3202 0x7f9639857320 WARN       codecparsers_h264 gsth264parser.c:2039:gst_h264_parse_pps: failed to read SE

    I’ll try playing around with the options and report back.
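
    A pipeline sketch for that reassembly step, assuming (as the typefind output above suggests) that the drone sends raw Annex-B H.264 over UDP rather than RTP, so the stream only needs to be parsed and decoded instead of RTP-depayloaded:

    gst-launch-1.0 -v udpsrc port=6038 caps="video/x-h264, stream-format=(string)byte-stream" \
       ! queue \
       ! h264parse \
       ! avdec_h264 \
       ! videoconvert \
       ! autovideosink

    avdec_h264 comes from the gst-libav plugins; any other H.264 decoder element available on the system could be substituted after h264parse.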

  • How to save/encode recorded raw PCM Data as AAC/MP4 format file in Android

    28 January 2015, by INVISIBLE

    I want to save recorded PCM data as an AAC/MP4 format file.
    I am using the AudioRecord class for recording audio in Android. I have successfully saved it as a WAV file by adding a WAV header to the raw data, but I don't know how to save it as an AAC/MP4 file, because AAC/MP4 is a compressed format, unlike WAV.
    The methods I am using for saving PCM data as WAV are pasted below.

    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
               SavedSampleRate, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING,
               bufferSize);
    recorder.startRecording();
     isRecording = true;

    recordingThread = new Thread(new Runnable() {
       @Override
       public void run() {
           writeAudioDataToFile();
       }
    }, "AudioRecorder Thread");

    recordingThread.start();

    The second method:

    private void writeAudioDataToFile() {

       byte data[] = new byte[bufferSize];
       // short sData[] = new short[bufferSize];
       String filename = getTempFilename();
       FileOutputStream os = null;

       try {
           os = new FileOutputStream(filename);
       } catch (Exception e) {
           e.printStackTrace();
       }

       int read = 0;

       if (null != os) {
           while (isRecording) {
               double sum = 0;
               read = recorder.read(data, 0, bufferSize);

               if (AudioRecord.ERROR_INVALID_OPERATION != read) {
                   try {

                       synchronized (this) {
                           // Necessary in order to convert negative shorts!
                           final int USHORT_MASK = (1 << 16) - 1;

                           final ByteBuffer buf = ByteBuffer.wrap(data).order(
                                   ByteOrder.LITTLE_ENDIAN);

                           final ByteBuffer newBuf = ByteBuffer.allocate(
                                   data.length).order(ByteOrder.LITTLE_ENDIAN);

                           int sample;

                           while (buf.hasRemaining()) {
                               short shortSample = buf.getShort();
                               sample = (int) shortSample & USHORT_MASK;

                               sample = sample * db_value_global;
                               sample = mRmsFilterSetting.filter
                                       .apply((((int) 0) | shortSample)
                                               * db_value_global);

                               newBuf.putShort((short) sample);
                           }

                           data = newBuf.array();

                           os.write(data);
                       }

                   } catch (Exception e) {
                       e.printStackTrace();
                   }
               }
           }

           try {
               os.close();
           } catch (Exception e) {
               e.printStackTrace();
           }
       }
    }

    And finally I save it as:

    private void copyWaveFile(ArrayList<String> inFilename, String outFilename) {
       FileInputStream[] in = null;
       FileOutputStream out = null;
       long totalAudioLen = 0;
       long totalDataLen = totalAudioLen + 36;
       long longSampleRate = SavedSampleRate;
       int channels = 2;
       long byteRate = RECORDER_BPP * SavedSampleRate * channels / 8;

       byte[] data = new byte[bufferSize];

       try {
           out = new FileOutputStream(outFilename);

           in = new FileInputStream[inFilename.size()];

           for (int i = 0; i < in.length; i++) {
               in[i] = new FileInputStream(inFilename.get(i));
               totalAudioLen += in[i].getChannel().size();
           }

           totalDataLen = totalAudioLen + 36;

           WriteWaveFileHeader(out, totalAudioLen, totalDataLen,
                   longSampleRate, channels, byteRate);

           for (int i = 0; i < in.length; i++) {
               while (in[i].read(data) != -1) {
                   out.write(data);
               }

               in[i].close();
           }

           out.close();
       } catch (Exception e) {
           e.printStackTrace();
       }
    }



    private void WriteWaveFileHeader(FileOutputStream out, long totalAudioLen,
           long totalDataLen, long longSampleRate, int channels, long byteRate)
           throws IOException {

       byte[] header = new byte[44];

       header[0] = 'R'; // RIFF/WAVE header
       header[1] = 'I';
       header[2] = 'F';
       header[3] = 'F';
       header[4] = (byte) (totalDataLen & 0xff);
       header[5] = (byte) ((totalDataLen >> 8) & 0xff);
       header[6] = (byte) ((totalDataLen >> 16) & 0xff);
       header[7] = (byte) ((totalDataLen >> 24) & 0xff);
       header[8] = 'W';
       header[9] = 'A';
       header[10] = 'V';
       header[11] = 'E';
       header[12] = 'f'; // 'fmt ' chunk
       header[13] = 'm';
       header[14] = 't';
       header[15] = ' ';
       header[16] = 16; // 4 bytes: size of 'fmt ' chunk
       header[17] = 0;
       header[18] = 0;
       header[19] = 0;
       header[20] = 1; // format = 1
       header[21] = 0;
       header[22] = (byte) channels;
       header[23] = 0;
       header[24] = (byte) (longSampleRate & 0xff);
       header[25] = (byte) ((longSampleRate >> 8) & 0xff);
       header[26] = (byte) ((longSampleRate >> 16) & 0xff);
       header[27] = (byte) ((longSampleRate >> 24) & 0xff);
       header[28] = (byte) (byteRate & 0xff);
       header[29] = (byte) ((byteRate >> 8) & 0xff);
       header[30] = (byte) ((byteRate >> 16) & 0xff);
       header[31] = (byte) ((byteRate >> 24) & 0xff);
       header[32] = (byte) (2 * 16 / 8); // block align
       header[33] = 0;
       header[34] = RECORDER_BPP; // bits per sample
       header[35] = 0;
       header[36] = 'd';
       header[37] = 'a';
       header[38] = 't';
       header[39] = 'a';
       header[40] = (byte) (totalAudioLen & 0xff);
       header[41] = (byte) ((totalAudioLen >> 8) & 0xff);
       header[42] = (byte) ((totalAudioLen >> 16) & 0xff);
       header[43] = (byte) ((totalAudioLen >> 24) & 0xff);

       out.write(header, 0, 44);
    }

    In this piece of code I am recording small PCM files with AudioRecord and saving them as a WAV file by adding a WAV header.

    Is there any step-by-step tutorial on how to save PCM data as an MP4/AAC file?

    Thanks in advance.
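
    For orientation, one common approach on Android (API 18+ for MediaMuxer, API 21+ for the buffer calls used here) is to push the PCM buffers through a MediaCodec AAC encoder and write the encoded frames into an MP4 container with MediaMuxer. The class below is only a minimal sketch of that idea, not tested against the code above; the bitrate, the timeouts and the assumption that each PCM chunk fits into one codec input buffer are simplifications.

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;

    import java.io.IOException;
    import java.nio.ByteBuffer;

    // Sketch: encode 16-bit little-endian PCM chunks to AAC-LC and mux them into an .mp4 file.
    public class PcmToAacMuxer {

        private final MediaCodec encoder;
        private final MediaMuxer muxer;
        private final MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        private final int sampleRate;
        private final int channelCount;
        private int trackIndex = -1;
        private boolean muxerStarted = false;
        private long presentationTimeUs = 0;

        public PcmToAacMuxer(String outputPath, int sampleRate, int channelCount) throws IOException {
            this.sampleRate = sampleRate;
            this.channelCount = channelCount;
            MediaFormat format = MediaFormat.createAudioFormat("audio/mp4a-latm", sampleRate, channelCount);
            format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
            format.setInteger(MediaFormat.KEY_BIT_RATE, 128000); // assumed bitrate
            encoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
            encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            encoder.start();
            muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        }

        // Feed one PCM chunk as read from AudioRecord.
        public void writePcm(byte[] pcm, int length) {
            int inIndex = encoder.dequeueInputBuffer(10000);
            if (inIndex >= 0) {
                ByteBuffer inBuf = encoder.getInputBuffer(inIndex);
                inBuf.clear();
                inBuf.put(pcm, 0, length);
                encoder.queueInputBuffer(inIndex, 0, length, presentationTimeUs, 0);
                // 16-bit samples: 2 bytes per sample per channel.
                presentationTimeUs += 1000000L * (length / 2 / channelCount) / sampleRate;
            }
            drain(false);
        }

        // Send end-of-stream, drain the remaining AAC frames and release codec and muxer.
        public void finish() {
            int inIndex = encoder.dequeueInputBuffer(10000);
            if (inIndex >= 0) {
                encoder.queueInputBuffer(inIndex, 0, 0, presentationTimeUs,
                        MediaCodec.BUFFER_FLAG_END_OF_STREAM);
            }
            drain(true);
            encoder.stop();
            encoder.release();
            if (muxerStarted) {
                muxer.stop();
            }
            muxer.release();
        }

        private void drain(boolean endOfStream) {
            while (true) {
                int outIndex = encoder.dequeueOutputBuffer(info, 10000);
                if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                    // The track must be added with the encoder's output format (it carries the AAC config).
                    trackIndex = muxer.addTrack(encoder.getOutputFormat());
                    muxer.start();
                    muxerStarted = true;
                } else if (outIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
                    if (!endOfStream) {
                        return; // nothing ready yet; come back with the next PCM chunk
                    }
                    // at end of stream, keep polling until the EOS buffer shows up
                } else if (outIndex >= 0) {
                    ByteBuffer outBuf = encoder.getOutputBuffer(outIndex);
                    if (muxerStarted && info.size > 0
                            && (info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0) {
                        muxer.writeSampleData(trackIndex, outBuf, info);
                    }
                    encoder.releaseOutputBuffer(outIndex, false);
                    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                        return;
                    }
                }
            }
        }
    }

    In the recording loop above, each buffer returned by recorder.read(data, 0, bufferSize) would be passed to writePcm(), and finish() would be called once isRecording becomes false, instead of (or in addition to) writing the raw bytes and the WAV header.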

  • Nodejs ffmpeg "The input file path can not be empty" error, but files exist

    29 October 2022, by 0szi

    I'm trying to merge an audio file with a video file from the same source (Youtube)


    In the following code I first read the console parameters with commander, then I define the videoOutput dir and download the highest-res video from YouTube with node-ytdl-core. After that I download the audio for the video, and in the callback of video.on("end", ...) I call the function merge().


    const path = require('path');
    const fs = require('fs');
    const readline = require("readline");
    const ytdl = require('ytdl-core');
    const { program } = require('commander');
    const ffmpeg = require('ffmpeg');

    program
        .option("--url, --url <url>", "Youtube video url")
        .option("--name, --name <name>", "Name of the video in hard drive")

    program.parse(process.argv);

    const options = program.opts();
    let url = options.url;
    let name = options.name;

    let videoOutput = path.resolve(`./video${name}.mp4`);

    let video = ytdl(url, {
      quality: "highestvideo"
    });

    let starttime = 0;

    video.pipe(fs.createWriteStream(videoOutput));

    video.once('response', () => {
      starttime = Date.now();
    });

    video.on('progress', (chunkLength, downloaded, total) => {
        const percent = downloaded / total;
        const downloadedMinutes = (Date.now() - starttime) / 1000 / 60;
        const estimatedDownloadTime = (downloadedMinutes / percent) - downloadedMinutes;
        readline.cursorTo(process.stdout, 0);
        process.stdout.write(`${(percent * 100).toFixed(2)}% downloaded `);
        process.stdout.write(`(${(downloaded / 1024 / 1024).toFixed(2)}MB of ${(total / 1024 / 1024).toFixed(2)}MB)\n`);
        process.stdout.write(`running for: ${downloadedMinutes.toFixed(2)}minutes`);
        process.stdout.write(`, estimated time left: ${estimatedDownloadTime.toFixed(2)}minutes `);
        readline.moveCursor(process.stdout, 0, -1);
    });

    video.on('end', () => {
      process.stdout.write('\n\n');
    });

    // repeat for audio
    video = ytdl(url, {
      quality: "highestaudio"
    });

    starttime = 0;

    let audioOutput = path.resolve(`./audio${name}.mp3`);

    video.pipe(fs.createWriteStream(audioOutput));

    video.once('response', () => {
      starttime = Date.now();
    });

    video.on('progress', (chunkLength, downloaded, total) => {
        const percent = downloaded / total;
        const downloadedMinutes = (Date.now() - starttime) / 1000 / 60;
        const estimatedDownloadTime = (downloadedMinutes / percent) - downloadedMinutes;
        readline.cursorTo(process.stdout, 0);
        process.stdout.write(`${(percent * 100).toFixed(2)}% downloaded `);
        process.stdout.write(`(${(downloaded / 1024 / 1024).toFixed(2)}MB of ${(total / 1024 / 1024).toFixed(2)}MB)\n`);
        process.stdout.write(`running for: ${downloadedMinutes.toFixed(2)}minutes`);
        process.stdout.write(`, estimated time left: ${estimatedDownloadTime.toFixed(2)}minutes `);
        readline.moveCursor(process.stdout, 0, -1);
    });

    function merge() {
        ffmpeg()
        .input("./videotest.mp4") // your video file input path
        .input("./audiotest.mp3") // your audio file input path
        .output("./finished.mp4") // your output path
        .outputOptions(['-map 0:v', '-map 1:a', '-c:v copy', '-shortest'])
        .on('start', (command) => {
          console.log('TCL: command -> command', command)
        })
        .on('error', (error) => console.log("errrrr", error))
        .on('end', () => console.log("Completed"))
        .run()
    }

    video.on('end', () => {
      process.stdout.write('\n\n');
      merge();
    });


    But even though the files are there, ffmpeg throws me this error: "The input file path can not be empty" (screenshot omitted).


    I also tried this in the video "end" callback, because maybe the audio is finished downloading before the video; it still doesn't work. I've also tried renaming the output dirs for the files, keeping the old files and rerunning the script so the files are 100% there. Still doesn't work.


    I have also tried absolute paths ("C:/..." and also with backslashes, "C:\...") but I still get the error message that the input file path can't be empty.


    Appreciate any piece of advice or help!
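
    As a point of comparison, here is a minimal sketch of the same merge step written against the fluent-ffmpeg npm package; the chained .input()/.outputOptions()/.run() API used in merge() above matches the fluent-ffmpeg module rather than the ffmpeg package that the script requires. The file names are the ones from the question, and an ffmpeg binary is assumed to be available on the PATH (fluent-ffmpeg can also be pointed at one with ffmpeg.setFfmpegPath()).

    // Sketch only: npm install fluent-ffmpeg
    const path = require('path');
    const ffmpeg = require('fluent-ffmpeg'); // chainable .input()/.output()/.run() API

    function merge() {
      ffmpeg()
        .input(path.resolve('./videotest.mp4')) // video-only download
        .input(path.resolve('./audiotest.mp3')) // audio-only download
        .output(path.resolve('./finished.mp4'))
        .outputOptions(['-map 0:v', '-map 1:a', '-c:v copy', '-shortest'])
        .on('start', (command) => console.log('spawned ffmpeg with:', command))
        .on('error', (error) => console.log('merge error:', error))
        .on('end', () => console.log('Completed'))
        .run();
    }

    merge();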
