Other articles (108)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as MP4, OGV, and WebM (formats supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded as MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
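
    The kind of conversion described can be sketched with ffmpeg along these lines (illustrative only: the codecs, flags, and file names here are assumptions, not MediaSPIP's actual configuration):

      $ ffmpeg -i upload.mov -c:v libx264 -c:a aac video.mp4          # MP4 (HTML5 and Flash)
      $ ffmpeg -i upload.mov -c:v libtheora -c:a libvorbis video.ogv  # OGV (HTML5)
      $ ffmpeg -i upload.mov -c:v libvpx -c:a libvorbis video.webm    # WebM (HTML5)
      $ ffmpeg -i upload.wav -c:a libmp3lame audio.mp3                # MP3 (HTML5 and Flash)
      $ ffmpeg -i upload.wav -c:a libvorbis audio.ogg                 # Ogg (HTML5)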

  • Contributing to its documentation

    10 April 2011

    Documentation is one of the most important and most demanding tasks in the development of a technical tool.
    Any outside contribution on this subject is essential: critiquing what already exists; helping to write articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; creating explanatory screencasts; translating the documentation into a new language.
    To do so, you can register on (...)

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work on Internet Explorer
    On Internet Explorer (8 and 7 at least), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
    If the configuration of this module contains a line that looks like the following, try removing or commenting it out to see whether the player then works correctly: (...)

On other sites (38365)

  • Capturing a PCM audio data stream into a file, and playing the stream via ffmpeg: how?

    11 April 2015, by icarus74

    I would like to do the following four things (separately), and need a bit of help understanding how to approach them; a rough sketch of each is given after the list.

    1. Dump audio data (from a serial-over-USB port), encoded as PCM, 16-bit, 8 kHz, little-endian, into a file (a plain binary dump, not in any container format). Can this approach be used:

      $ cat /dev/ttyUSB0 > somefile.dat

    Can I press ^C to stop writing the file while the dump is in progress, as per the above command?

    2. Stream audio data (of the same kind described above) directly into ffmpeg for it to play out? Like this:

      $ cat /dev/ttyUSB0 | ffmpeg

    or do I have to specify the device port as a "-source"? If so, I could not figure out the format.

    Note that I have tried this:

    $ cat /dev/urandom | aplay

    which works as expected by playing white noise, but trying the following does not help:

    $ cat /dev/ttyUSB1 | aplay -f S16_LE

    Yet when I open /dev/ttyUSB1 with picocom at 115200 bps, 8-bit, no parity, I do see gibberish, indicating the presence of audio data, exactly when I expect it.

    3. Use the audio data dumped into the file as a source in ffmpeg? If so, how? So far I have the impression that ffmpeg only reads files in standard container formats.

    4. Use pre-recorded audio captured in any format (perhaps .mp3 or .wav) and have ffmpeg stream it into the /dev/ttyUSB0 device. Should I be using this as a "-sink" parameter, pipe into it, or redirect into it? Also, is it possible to run ffmpeg in two terminal windows, capturing and transmitting audio data from/into the same device /dev/ttyUSB0, simultaneously?
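
    A minimal sketch of how each step might look, using ffmpeg's raw PCM handling and assuming the stream really is headerless 16-bit little-endian mono at 8 kHz (file names are illustrative):

      # 1. Dump the raw bytes; ^C simply stops the capture, and the file
      #    stays usable because raw PCM has no header or trailer to finalize.
      $ cat /dev/ttyUSB0 > somefile.dat

      # 2. Play the live stream: put the serial port in raw mode, then tell
      #    ffplay how to interpret the headerless bytes it reads from stdin.
      $ stty -F /dev/ttyUSB0 raw 115200
      $ cat /dev/ttyUSB0 | ffplay -f s16le -ar 8000 -ac 1 -

      # 3. Use the dumped file as an ffmpeg source by naming the raw format
      #    explicitly, e.g. to convert it to WAV:
      $ ffmpeg -f s16le -ar 8000 -ac 1 -i somefile.dat out.wav

      # 4. Decode a recording back to raw PCM and redirect it to the device:
      $ ffmpeg -i recording.mp3 -f s16le -ar 8000 -ac 1 - > /dev/ttyUSB0

    Headerless 16-bit PCM is exactly what ffmpeg calls raw audio (its s16le format). Two caveats: the aplay attempt may have failed because the tty was not in raw mode rather than because of aplay's flags (with -f S16_LE its defaults of 8000 Hz and one channel actually match this stream), and 16 bits at 8000 Hz mono is 128000 bit/s, slightly more than a 115200 baud link carries, so some loss may be unavoidable.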

    My knowledge of digital audio recording/processing formats and codecs is somewhat limited, so I am not sure whether what I am trying to do qualifies as working with 'raw' audio.

    If ffmpeg is unable to do what I am hoping to achieve, could gstreamer be the solution?

    PS: If anyone thinks that the question could be improved, please feel free to suggest specific points. I would be happy to add any detail requested, provided I have the information.

  • Create video with size based on image and place a video somewhere with an offset

    10 March 2024, by NoKey

    I am trying out FFMPEG and I am unsure how hard it is to do what I want. I have some device frames and I want to play a video inside the frame. For example, this is a device frame:

    [image: an iPhone device frame]

    Now I want to play a video within the screen of the iPhone. I already have the exact X and Y offset where the video must be placed to show it correctly. I have the following challenges to make it work, and I want to make sure FFMPEG can do it before I spend too much time reinventing the wheel:

    • The output video must be as big as the PNG. This is already a
      confusing part for me. I have the width and height available, but
      from what I have seen, FFMPEG takes the size of the input video as
      the final size. The final output should of course run for the
      length of the input video.

    • The background must be transparent (so no black background; I want
      to play the video on top of a website, so it is nice if it is
      transparent and the corners are not black).

    • The ability to place a video at a specified X and Y offset inside
      the device frame.

    • Not sure if it is possible in the same command, but the video may
      also need to be resized to make it fit. I have the exact dimensions
      for the video.

    What I struggle with most is point 1, where the output video must have a transparent background with the device frame placed on it. Does anybody have tips?
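
    For what it's worth, one possible shape for such a command, assuming the frame PNG is 800x1600 with a transparent screen area and the video belongs at offset 60:200 scaled to 680x1200 (all of these numbers and file names are placeholders): scale the video, pad it onto a fully transparent canvas the size of the PNG, overlay the frame on top, and encode with a codec that keeps the alpha channel, e.g. VP9 in WebM for playback on a website:

      $ ffmpeg -i screen.mp4 -i device_frame.png -filter_complex \
          "[0:v]scale=680:1200,format=rgba,pad=800:1600:60:200:color=black@0[vid]; \
           [vid][1:v]overlay=0:0:format=auto[out]" \
          -map "[out]" -c:v libvpx-vp9 -pix_fmt yuva420p framed.webm

    Here pad sets the canvas to the PNG's size (so the output is as big as the PNG and runs for the length of the input video), color=black@0 keeps the padding fully transparent, format=auto lets overlay preserve the alpha channel, and yuva420p carries that alpha into the encoded file.
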
  • Does the ffmpeg overlay filter support a fade option?

    2 December 2016, by Stefan

    I need several overlaid image files to fade in/out at different time intervals on a video file using ffmpeg.

    I am aware of the technique some people use to achieve the effect I'm looking for, which looks something like this:

    ffmpeg.exe -i in_video.mov -loop 1 -i image.png
    -filter_complex "
    [1:v]fade=in:st=2:d=0.5:alpha=1,fade=out:st=4:d=0.5:alpha=1[t0];
    [1:v]fade=in:st=8.6:d=0.5:alpha=1,fade=out:st=12.6:d=0.5:alpha=1[t1];
    [1:v]fade=in:st=12.2:d=0.5:alpha=1,fade=out:st=14.2:d=0.5:alpha=1[t2];
    [0:v][t0]overlay=shortest=1[tmp0];
    [tmp0][t1]overlay=shortest=1[tmp1];
    [tmp1][t2]overlay=shortest=1[tmp2]"
    -map "[tmp2]" out_video.mov

    But I experience a linear performance decrease as I tack on more overlays.

    By leveraging the 'enable' overlay option like so, I can achieve great performance, but I lose the ability to fade:

    ffmpeg.exe
    -i in_video.mov
    -i image.png
    -filter_complex "
    [0:v][1:v] overlay=enable='between(t,2,4.5)' [tmp0];
    [tmp0][1:v] overlay=enable='between(t,8.6,13.1)' [tmp1];
    [tmp1][1:v] overlay=enable='between(t,17.3,21.5)' [tmp2]"
    -map "[tmp2]" output_video.mov

    Can I tack on a 'fade' option along with the 'enable' option to optimize performance? Or should I just attempt to contribute a new option for the overlay filter to the open-source project?
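
    For what it's worth, the two techniques are not mutually exclusive: each fade chain can stay as in the first command while the matching overlay is additionally gated with 'enable', so the blend is skipped entirely outside its window. A sketch for a single interval, reusing the timings from the examples above (whether this recovers the full speed of the enable-only version would need measuring, since the fade chains themselves still run on every frame):

    ffmpeg.exe -i in_video.mov -loop 1 -i image.png
    -filter_complex "
    [1:v]fade=in:st=2:d=0.5:alpha=1,fade=out:st=4:d=0.5:alpha=1[t0];
    [0:v][t0] overlay=shortest=1:enable='between(t,2,4.5)' [out]"
    -map "[out]" out_video.mov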