Advanced search

Media (0)

Keyword: - Tags - /xmlrpc

No media matching your criteria is available on this site.

Other articles (34)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...)

  • Adding notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Support for all types of media

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

On other sites (11015)

  • GStreamer: How to set "stream-number" pad property of mpegtsmux element? [closed]

    8 November 2024, by TishSerg

    According to gst-inspect-1.0 mpegtsmux, mpegtsmux's sink pads have a writable stream-number property:

    


    ...
Pad Templates:
  SINK template: 'sink_%d'
    Availability: On request
    Capabilities:
      ...
    Type: GstBaseTsMuxPad
    Pad Properties:
      ...

      stream-number       : stream number
                            flags: readable, writable
                            Integer. Range: 0 - 31 Default: 0


    


    But when I try to set it, GStreamer says there's no such property. The following listing shows that I can run a multi-stream pipeline without setting that property, but as soon as I add it the pipeline refuses to launch.

    


    PS C:\gstreamer\1.0\msvc_x86_64\bin> ./gst-launch-1.0 mpegtsmux name=mux ! udpsink host=192.168.144.255 port=5600 sync=no `
>> videotestsrc is-live=true pattern=ball ! "video/x-raw, width=1920, height=1080, profile=main" ! x264enc ! mux.sink_300 `
>> videotestsrc is-live=true ! "video/x-raw, width=720, height=576" ! x264enc ! mux.sink_301
Use Windows high-resolution clock, precision: 1 ms
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
Redistribute latency...
Redistribute latency...
handling interrupt.9.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:03.773243400
Setting pipeline to NULL ...
Freeing pipeline ...
PS C:\gstreamer\1.0\msvc_x86_64\bin> ./gst-launch-1.0 mpegtsmux name=mux sink_300::stream-number=1 ! udpsink host=192.168.144.255 port=5600 sync=no `
>> videotestsrc is-live=true pattern=ball ! "video/x-raw, width=1920, height=1080, profile=main" ! x264enc ! mux.sink_300 `
>> videotestsrc is-live=true ! "video/x-raw, width=720, height=576" ! x264enc ! mux.sink_301
WARNING: erroneous pipeline: no property "sink_300::stream-number" in element "mpegtsmux"
PS C:\gstreamer\1.0\msvc_x86_64\bin> .\gst-launch-1.0.exe --version
gst-launch-1.0 version 1.24.8
GStreamer 1.24.8
Unknown package origin
PS C:\gstreamer\1.0\msvc_x86_64\bin> .\gst-launch-1.0.exe --version
gst-launch-1.0 version 1.24.9
GStreamer 1.24.9
Unknown package origin
PS C:\gstreamer\1.0\msvc_x86_64\bin> ./gst-launch-1.0 mpegtsmux name=mux sink_300::stream-number=1 ! udpsink host=192.168.144.255 port=5600 sync=no `
>> videotestsrc is-live=true pattern=ball ! "video/x-raw, width=1920, height=1080, profile=main" ! x264enc ! mux.sink_300 `
>> videotestsrc is-live=true ! "video/x-raw, width=720, height=576" ! x264enc ! mux.sink_301
WARNING: erroneous pipeline: no property "sink_300::stream-number" in element "mpegtsmux"


    


    I even updated GStreamer, but still no luck. I tried that because I found release notes saying there were updates regarding that property:

    


      ### MPEG-TS improvements

      -   mpegtsdemux gained support for
          -   segment seeking for seamless non-flushing looping, and
          -   synchronous KLV
      -   mpegtsmux now
          -   allows attaching PCR to non-PES streams
          -   allows setting of the PES stream number for AAC audio and AVC video streams via a new “stream-number” property on the muxer sink pads. Currently, the PES stream number is hard-coded to zero for these stream types.


    


    The syntax seems correct (pad_name::pad_prop_name on the element). I've run out of ideas about what I'm doing wrong with that property.

    


    Broader context:

    


    I want to set that property because I want an exact sequence for the streams I'm muxing.

    


    When I feed mpegtsmux two video streams and one audio stream (from capture devices) without specifying the stream numbers, they get muxed in a random order (checked with ffprobe). Sometimes they are in the desired order, but sometimes they aren't. The worst case is when the audio stream ends up as the first stream in the file, because video players get confused trying to play such a .ts file. I have to remux such files using ffmpeg's -map option. If I could set exact stream indices in mpegtsmux (not to be confused with stream PIDs), I could avoid analyzing the actual stream layout and remuxing; a sketch of that workaround follows the stream list below.

    


    Example of the real layout of the streams (ffprobe output):

    


    Input #0, mpegts, from '████████████████████████████████████████':
  Duration: 00:20:09.64, start: 3870.816656, bitrate: 6390 kb/s
  Program 1
  Stream #0:2[0x41]: Video: h264 (Baseline) (HDMV / 0x564D4448), yuvj420p(pc, bt709, progressive), 1920x1080, 30 fps, 30 tbr, 90k tbn
  Stream #0:1[0x4b]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, mono, fltp, 130 kb/s
  Program 2
  Stream #0:0[0x42]: Video: h264 (High) (HDMV / 0x564D4448), yuv420p(progressive), 720x576, 25 fps, 25 tbr, 90k tbn


    


    You can see 3 streams:

    


      

    • FullHD video with PID 0x41 (defined by me as mpegtsmux0.sink_65) has index 2, while I want it to be 0
    • PAL video with PID 0x42 (defined by me as mpegtsmux0.sink_66) has index 0, while I want it to be 1
    • Audio with PID 0x4b (defined by me as mpegtsmux0.sink_75) has index 1, while I want it to be 2


    

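    As an illustration of that -map workaround (this is not part of the original question), here is a minimal Java sketch that shells out to ffprobe to read the actual stream order and then rebuilds the file with explicit -map options so that the video streams come first. The file names are placeholders and the "video before audio" ordering is an assumption for the example.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.List;

    public class TsStreamReorder {
        public static void main(String[] args) throws Exception {
            String input = "capture.ts";      // placeholder: file produced by mpegtsmux
            String output = "reordered.ts";   // placeholder: remuxed result

            // Ask ffprobe for "index,codec_type" of every stream, one CSV line per stream.
            Process probe = new ProcessBuilder(
                    "ffprobe", "-v", "error",
                    "-show_entries", "stream=index,codec_type",
                    "-of", "csv=p=0", input)
                    .redirectErrorStream(true).start();

            List<String> videoIdx = new ArrayList<>();
            List<String> audioIdx = new ArrayList<>();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(probe.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    String[] parts = line.trim().split(",");
                    if (parts.length < 2) continue;
                    if ("video".equals(parts[1])) videoIdx.add(parts[0]);
                    else if ("audio".equals(parts[1])) audioIdx.add(parts[0]);
                }
            }
            probe.waitFor();

            // Remux with stream copy, mapping video streams first and audio last.
            List<String> cmd = new ArrayList<>(List.of("ffmpeg", "-y", "-i", input));
            for (String idx : videoIdx) { cmd.add("-map"); cmd.add("0:" + idx); }
            for (String idx : audioIdx) { cmd.add("-map"); cmd.add("0:" + idx); }
            cmd.addAll(List.of("-c", "copy", output));
            new ProcessBuilder(cmd).inheritIO().start().waitFor();
        }
    }

    This only automates the manual remux step: ffmpeg writes the output streams in the order the -map options appear, so the audio stream can no longer end up first.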

  • Issue in recording video

    16 November 2015, by human123

    I am trying to record video at 480*480 resolution, like in Vine, using javacv. As a starting point I used the sample provided at https://github.com/bytedeco/javacv/blob/master/samples/RecordActivity.java. Video is getting recorded (but not at the desired resolution) and saved.

    But the issue is that 480*480 resolution is not supported natively on Android, so some pre-processing needs to be done to get the video at the desired resolution.

    So once I was able to record video using the code sample provided by javacv, the next challenge was how to pre-process the video. On research I found that efficient cropping is possible when the required final image width is the same as the recorded image width. Such a solution was provided in the SO question Recording video on Android using JavaCV (Updated 2014 02 17). I changed the onPreviewFrame method as suggested in that answer.

       @Override
       public void onPreviewFrame(byte[] data, Camera camera) {
           if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
               startTime = System.currentTimeMillis();
               return;
           }
           if (RECORD_LENGTH > 0) {
               int i = imagesIndex++ % images.length;
               yuvImage = images[i];
               timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
           }
           /* get video data */
           imageWidth = 640;
           imageHeight = 480;
           int finalImageHeight = 360;
           if (yuvImage != null && recording) {
               ByteBuffer bb = (ByteBuffer)yuvImage.image[0].position(0); // resets the buffer
               final int startY = imageWidth*(imageHeight-finalImageHeight)/2;
               final int lenY = imageWidth*finalImageHeight;
               bb.put(data, startY, lenY);
               final int startVU = imageWidth*imageHeight + imageWidth*(imageHeight-finalImageHeight)/4;
               final int lenVU = imageWidth* finalImageHeight/2;
               bb.put(data, startVU, lenVU);
               try {
                   long t = 1000 * (System.currentTimeMillis() - startTime);
                   if (t > recorder.getTimestamp()) {
                       recorder.setTimestamp(t);
                   }
                   recorder.record(yuvImage);
               } catch (FFmpegFrameRecorder.Exception e) {
                   Log.e(LOG_TAG, "problem with recorder():", e);
               }
           }


       }
    }

    Please also note that this solution was provided for an older version of javacv. The resulting video had a yellowish overlay covering two thirds of the frame. There was also an empty section on the left side, as the video was not cropped correctly.

    So my question is: what is the most appropriate solution for cropping videos using the latest version of javacv?
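    As an aside (this is not from the original post, just one possible route): recent JavaCV releases ship FFmpegFrameFilter, which can hand the cropping to FFmpeg's crop filter instead of slicing the NV21 buffer by hand. A rough sketch, assuming JavaCV 1.5.x, a 640x480 NV21 preview frame, and a recorder created for 480x480 output; the wrapper class and method names here are just for illustration:

    import org.bytedeco.javacv.FFmpegFrameFilter;
    import org.bytedeco.javacv.Frame;
    import org.bytedeco.javacv.FrameFilter;
    // The constant lives in org.bytedeco.ffmpeg.global.avutil on JavaCV 1.5.x
    // (older releases used org.bytedeco.javacpp.avutil).
    import static org.bytedeco.ffmpeg.global.avutil.AV_PIX_FMT_NV21;

    public class SquareCrop {
        private FFmpegFrameFilter cropFilter;

        // Call once, e.g. from initRecorder(), with the camera preview size (640x480 here).
        void initCropFilter(int imageWidth, int imageHeight) throws FrameFilter.Exception {
            // crop=w:h centres the crop window by default.
            cropFilter = new FFmpegFrameFilter("crop=480:480", imageWidth, imageHeight);
            cropFilter.setPixelFormat(AV_PIX_FMT_NV21); // raw camera preview format
            cropFilter.start();
        }

        // Call from onPreviewFrame() with the Frame holding the preview bytes.
        Frame crop(Frame yuvImage) throws FrameFilter.Exception {
            cropFilter.push(yuvImage);
            return cropFilter.pull(); // 480x480 frame, ready for recorder.record(...)
        }

        void releaseCropFilter() throws FrameFilter.Exception {
            cropFilter.stop();
            cropFilter.release();
        }
    }

    The recorder would then be created as new FFmpegFrameRecorder(ffmpeg_link, 480, 480, 1) so that the output size matches the cropped frames.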

    Code after making the change suggested by Alex Cohn:

       @Override
       public void onPreviewFrame(byte[] data, Camera camera) {
           if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
               startTime = System.currentTimeMillis();
               return;
           }
           if (RECORD_LENGTH > 0) {
               int i = imagesIndex++ % images.length;
               yuvImage = images[i];
               timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
           }
           /* get video data */
           imageWidth = 640;
           imageHeight = 480;      
           destWidth = 480;

           if (yuvImage != null && recording) {
               ByteBuffer bb = (ByteBuffer)yuvImage.image[0].position(0); // resets the buffer
               int start = 2*((imageWidth-destWidth)/4); // this must be even
                for (int row = 0; row < imageHeight*3/2; row++) { // Y rows plus interleaved VU rows (NV21)
                   bb.put(data, start, destWidth);
                   start += imageWidth;
               }
               try {
                   long t = 1000 * (System.currentTimeMillis() - startTime);
                   if (t > recorder.getTimestamp()) {
                       recorder.setTimestamp(t);
                   }
                   recorder.record(yuvImage);
               } catch (FFmpegFrameRecorder.Exception e) {
                   Log.e(LOG_TAG, "problem with recorder():", e);
               }
           }


       }

    A screenshot from the video generated with this code (destWidth = 480) is:

    video resolution 480*480

    Next I tried capturing a video with destWidth specified as 639. The result is:

    639*480

    When destWidth is 639, the video repeats its contents twice. When it is 480, the contents are repeated 5 times and the green overlay and distortion are worse.

    Also, when destWidth = imageWidth, the video is captured properly; i.e., at 640*480 there is no repetition of video contents and no green overlay.

    Converting Frame to IplImage

    When this question was first asked, I forgot to mention that the record method in FFmpegFrameRecorder now accepts an object of type Frame, whereas earlier it took an IplImage object. So I tried to apply Alex Cohn's solution by converting the Frame to an IplImage.

    //---------------------------------------
    // initialize ffmpeg_recorder
    //---------------------------------------
    private void initRecorder() {

       Log.w(LOG_TAG,"init recorder");

       imageWidth = 640;
       imageHeight = 480;

       if (RECORD_LENGTH > 0) {
           imagesIndex = 0;
           images = new Frame[RECORD_LENGTH * frameRate];
           timestamps = new long[images.length];
           for (int i = 0; i < images.length; i++) {
               images[i] = new Frame(imageWidth, imageHeight, Frame.DEPTH_UBYTE, 2);
               timestamps[i] = -1;
           }
       } else if (yuvImage == null) {
           yuvImage = new Frame(imageWidth, imageHeight, Frame.DEPTH_UBYTE, 2);
           Log.i(LOG_TAG, "create yuvImage");
           OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();
           yuvIplimage = converter.convert(yuvImage);

       }

       Log.i(LOG_TAG, "ffmpeg_url: " + ffmpeg_link);
       recorder = new FFmpegFrameRecorder(ffmpeg_link, imageWidth, imageHeight, 1);
       recorder.setFormat("flv");
       recorder.setSampleRate(sampleAudioRateInHz);
       // Set in the surface changed method
       recorder.setFrameRate(frameRate);

       Log.i(LOG_TAG, "recorder initialize success");

       audioRecordRunnable = new AudioRecordRunnable();
       audioThread = new Thread(audioRecordRunnable);
       runAudioThread = true;
    }



    @Override
       public void onPreviewFrame(byte[] data, Camera camera) {
           if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
               startTime = System.currentTimeMillis();
               return;
           }
           if (RECORD_LENGTH > 0) {
               int i = imagesIndex++ % images.length;
               yuvImage = images[i];
               timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
           }
           /* get video data */
           int destWidth = 640;

           if (yuvIplimage != null && recording) {
               ByteBuffer bb = yuvIplimage.getByteBuffer(); // resets the buffer
               int start = 2*((imageWidth-destWidth)/4); // this must be even
            for (int row = 0; row < imageHeight*3/2; row++) { // Y rows plus interleaved VU rows (NV21)
                   bb.put(data, start, destWidth);
                   start += imageWidth;
               }
               try {
                   long t = 1000 * (System.currentTimeMillis() - startTime);
                   if (t > recorder.getTimestamp()) {
                       recorder.setTimestamp(t);
                   }
                   recorder.record(yuvImage);
               } catch (FFmpegFrameRecorder.Exception e) {
                   Log.e(LOG_TAG, "problem with recorder():", e);
               }
           }


       }

    But the videos generated with this method contained only green frames.