
Other articles (71)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Encoding and transformation into formats readable on the Internet

    10 April 2011

    MediaSPIP transforms and re-encodes uploaded documents in order to make them readable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player needed by older browsers.
    Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)

  • Automated installation script of MediaSPIP

    25 April 2011, by

    To overcome difficulties caused mainly by the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server running a compatible Linux distribution.
    You must have SSH access to your server and a root account to use it; the script will then install the dependencies. Contact your provider if you do not have that.
    The documentation on using this installation script is available here.
    The code of this (...)

On other sites (7891)

  • Revision 39100: display a bit more info: - number of distinct users - number ...

    29 June 2010, by kent1@… — Log

    Display a bit more info: - number of distinct users - number of distinct days - number of distinct actions. Allow sorting by action via a list.

  • ffmpeg real-time buffer too full or near too full, frame dropped; I even tried increasing rtbufsize. What could be going wrong?

    21 May 2024, by Ali Azlan

    We have software that captures the stream from a camera connected to the laptop or device using ffmpeg-python:

    


    self.process = (
        ffmpeg
        # tried with rtbufsize=1000M (enough, I suppose); sometimes the error does
        # not occur even with the default rtbufsize, which is around 3 MB
        .input(video, s='640x480', **self.args)
        .output('pipe:', format='rawvideo', pix_fmt='rgb24')
        .overwrite_output()
        .run_async(pipe_stdout=True)
    )
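    For reference, here is a minimal sketch of how the relevant capture options can be passed explicitly to the dshow input with ffmpeg-python; the device name, frame rate and buffer size are illustrative guesses, since self.args is not shown in the question.

    import ffmpeg  # ffmpeg-python

    # 'video=Integrated Camera', the frame rate and the buffer size are assumed
    # values for illustration, not settings taken from the original code.
    process = (
        ffmpeg
        .input('video=Integrated Camera', format='dshow',
               rtbufsize='1000M',   # the real-time buffer the dshow error refers to
               framerate=30,        # ask the device for a fixed frame rate
               s='640x480')
        .output('pipe:', format='rawvideo', pix_fmt='rgb24')
        .overwrite_output()
        .run_async(pipe_stdout=True)
    )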


    


    Most of the time the error appears while the software is still initializing, but I have also received it after the software has fully started and has been running for a long time, e.g. after 12 hours or more.

    


    


    Error: [dshow @ 000002248916e240] real-time buffer [Integrated Camera] [video input] too full or near too full (80% of size: 3041280 [rtbufsize parameter])! frame dropped!
        Last message repeated 1 times
    [dshow @ 000002248916e240] real-time buffer [Integrated Camera] [video input] too full or near too full (101% of size: 3041280 [rtbufsize parameter])! frame dropped!

    


    


    What might we be doing wrong?

    


    Edit 1:

    


    Below is the code that consumes the frames captured from the video using ffmpeg:

    


    def frame_reader(self):
        while True:
            # Read one raw RGB frame (width * height * 3 bytes) from ffmpeg's stdout
            in_bytes = self.process.stdout.read(self.width * self.height * 3)
            if not in_bytes:
                break
            try:
                in_frame = (
                    np
                    .frombuffer(in_bytes, np.uint8)
                    .reshape([self.height, self.width, 3])
                )
                frame = cv2.resize(in_frame, (640, 480))
                frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
            except Exception as e:
                logger.error(e, exc_info=True)
                # keep the raw bytes for inspection, but skip frames that could not be reshaped
                in_frame = np.frombuffer(in_bytes, np.uint8)
                continue

            # keep only the most recent frame in the queue
            if not self.q.empty():
                try:
                    self.q.get_nowait()
                except queue.Empty:
                    pass
            self.q.put(frame)

    


  • ffmpeg: Apply a time-based expression to control lowpass, highpass, aphaser, or aecho [closed]

    17 May 2024, by Wes Modes

    I am constructing complexFilters to procedurally generate audio with ffmpeg via fluent-ffmpeg. The interesting part of my fluent-ffmpeg complex filters looks like:

    


    {
        "filter": "volume",
        "options": {
            "volume": "min(1, max(0, ((cos(PI * t * 1 / 13) * 1 + cos(PI * t * 1 / 7) * 0.5 + cos(PI * t * 1 / 3) * 0.25) + -0.5) * 0.75 * -1 + 0.5))",
            "eval": "frame"
        },
        "inputs": "track_1_output",
        "outputs": "track_1_adjusted"
    },


    


    Note that I am using an expression to determine the volume dynamically. To simplify my tests, I'm constructing complexFilters on the command line to understand the mysteries of some of the lightly-documented ffmpeg filters. Here's a working CLI test that uses the volume filter to fade the sources in and out according to a time-based expression:

    


    # working fade in/out
ffmpeg -i knox.mp3 -i static.mp3 -filter_complex \
"[0:a] \
    volume = 'min(1, max(0, 0.5 + 0.5 * cos(PI * t / 5)))': eval=frame \
    [a0]; \
[1:a] \
    volume = 'min(1, max(0, 0.5 - 0.5 * cos(PI * t / 5)))': eval=frame \
    [a1]; \
[a0][a1]
    amix = inputs=2: duration=shortest" \
-c:a libmp3lame -q:a 2 output.mp3
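    For comparison, a sketch of the same crossfade written with ffmpeg-python rather than the command line or the fluent-ffmpeg setup above; the filter names and expressions are copied from the working command, while the file names and codec settings simply mirror that test.

    import ffmpeg  # ffmpeg-python

    knox = ffmpeg.input('knox.mp3').audio
    static = ffmpeg.input('static.mp3').audio

    # volume accepts an expression in t when eval=frame, exactly as in the CLI test
    a0 = knox.filter('volume', volume='min(1, max(0, 0.5 + 0.5 * cos(PI * t / 5)))', eval='frame')
    a1 = static.filter('volume', volume='min(1, max(0, 0.5 - 0.5 * cos(PI * t / 5)))', eval='frame')

    mixed = ffmpeg.filter([a0, a1], 'amix', inputs=2, duration='shortest')
    ffmpeg.output(mixed, 'output.mp3', acodec='libmp3lame', **{'q:a': 2}).run()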


    


    Now I want to apply lowpass, highpass, aphaser, and/or aecho to the transitions using a time-based expression. For instance, looking at lowpass, ffmpeg -filters says that it has timeline support:

    


      T.. = Timeline support
 TSC lowpass           A->A       Apply a low-pass filter with 3dB point frequency.


    


    How can I similarly apply a time-based expression to lowpass, highpass, aphaser, or aecho? I thought the mix option might be the key, but I couldn't construct a working example and couldn't find any examples.

    


    # applying a lowpass filter
ffmpeg -i knox.mp3 -filter_complex \
"[0:a] \
    lowpass=f=3000:mix='0.5 + 0.5 * cos(PI * t / 5)';" \
-c:a libmp3lame -q:a 2 output.mp3


    


    Or if you prefer the fluent-ffmpeg filter:

    


    const ffmpeg = require('fluent-ffmpeg');

ffmpeg('knox.mp3')
    .complexFilter([
        {
            filter: 'lowpass',
            options: {
                f: 3000,
                mix: '0.5 + 0.5 * cos(PI * t / 5)'
            },
            inputs: '[0:a]',
        }
    ])
    .audioCodec('libmp3lame')
    .audioQuality(2)
    .save('output.mp3')
    .on('end', () => {
        console.log('Processing finished successfully');
    })
    .on('error', (err) => {
        console.error('Error: ' + err.message);
    });


    


    With the resultant error:

    


    [Parsed_lowpass_1 @ 0x7f9a72005c00] [Eval @ 0x7ff7bec423d8] Undefined constant or missing '(' in 't/5)'
[Parsed_lowpass_1 @ 0x7f9a72005c00] Unable to parse option value "0.5 + 0.5 * cos(PI * t / 5)"
Error applying option 'mix' to filter 'lowpass': Invalid argument


    


    What complicated ffmpeg faerie magic is needed to make some of the filters other than volume controlled by a time-based expression?
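    For context, a minimal sketch assuming that the "T" flag reported by ffmpeg -filters refers to the generic timeline option enable: enable does accept an expression in t, but it only switches the filter on and off over time rather than blending it smoothly the way a dynamic mix would; the "C" flag points at the sendcmd/asendcmd filters as another avenue for changing options over time. File names and the gating expression below are assumptions.

    import ffmpeg  # ffmpeg-python

    # Gate the lowpass on during the first 5 seconds of every 10-second cycle.
    # This is hard on/off gating via the timeline 'enable' option, not a smooth
    # time-based blend of 'mix'.
    audio = ffmpeg.input('knox.mp3').audio
    filtered = audio.filter('lowpass', f=3000, enable='lt(mod(t,10),5)')
    ffmpeg.output(filtered, 'output.mp3', acodec='libmp3lame', **{'q:a': 2}).run()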