
Other articles (72)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
    images: png, gif, jpg, bmp and more
    audio: MP3, Ogg, Wav and more
    video: AVI, MP4, OGV, mpg, mov, wmv and more
    text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • No talk of markets, clouds, etc.

    10 April 2011

    The vocabulary used on this site tries to avoid any reference to the fads that flourish so freely
    across web 2.0 and in the businesses that live off it.
    You are therefore invited to banish the use of terms such as "Brand", "Cloud", "Market", etc.
    Our motivation is above all to create a simple tool, accessible to everyone, that encourages
    the sharing of creations on the Internet and lets authors keep as much autonomy as possible.
    No "Gold or Premium contract" is therefore planned, no (...)

  • Depositing media and themes by FTP

    31 May 2013, by

    The MediaSPIP tool also handles media transferred by FTP. If you prefer to deposit files this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client (a scripted sketch follows below).
    From the start you will find the following folders in your FTP space:
    config/: the site's configuration folder
    IMG/: media already processed and online on the site
    local/: the website's cache directory
    themes/: themes and customised stylesheets
    tmp/: working folder (...)
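
    For scripted deposits, a minimal sketch using Python's standard ftplib; the hostname, credentials, filename and target path are placeholders for illustration (which folder uploads should land in depends on how your MediaSPIP site is configured):

    from ftplib import FTP

    # Placeholder host and credentials: use the access details
    # retrieved for your own MediaSPIP site.
    ftp = FTP("ftp.example.org")
    ftp.login("user", "password")

    # Upload a clip into the FTP space; the target path shown
    # here is illustrative, not a MediaSPIP convention.
    with open("clip.mp4", "rb") as f:
        ftp.storbinary("STOR clip.mp4", f)

    ftp.quit()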

On other sites (10323)

  • How to capture a movie with gphoto2 + ffmpeg and serve it to an HTML embed

    1 April 2021, by Doglas Antonio Dembogurski Fei

    I am trying to capture video from a Panasonic DC-GH5 camera and serve it so it can be viewed in a browser, without ffserver, since ffserver is deprecated.

    I am using Ubuntu 20.04:

    # gphoto2 -v

    gphoto2         2.5.23         gcc, popt(m), exif, cdk, aa, jpeg, readline
    libgphoto2      2.5.25         standard camlibs (SKIPPING lumix), gcc, ltdl, EXIF
    libgphoto2_port 0.12.0         iolibs: disk ptpip serial usb1 usbdiskdirect usbscsi, gcc, ltdl, EXIF, USB, serial without locking
    I tried this command:

    ffmpeg -f video4linux2 -s 640x480 -r 30 -i /dev/video0 -thread_queue_size 512 -ac 1 -f alsa -i pulse -f webm -listen 1 -seekable 0 -multiple_requests 1 http://localhost:8090
    and embedded

    <video src="http://localhost:8090"></video>

    in index.php, but nothing appears. If anyone knows a way to set up a server on a specific port I would appreciate it. Thank you.
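
    One way this could be wired together, as a minimal sketch: let gphoto2 write its movie capture to stdout and have ffmpeg read the pipe and serve it over HTTP, reusing the -listen output from the command above. The MJPEG input format, the libvpx settings and the port are assumptions, not verified against a DC-GH5:

    import subprocess

    # gphoto2 streams the camera's movie capture (typically MJPEG) to stdout.
    gphoto = subprocess.Popen(
        ["gphoto2", "--capture-movie", "--stdout"],
        stdout=subprocess.PIPE,
    )

    # ffmpeg reads the pipe, re-encodes to WebM and serves it over HTTP.
    ffmpeg = subprocess.Popen(
        [
            "ffmpeg",
            "-f", "mjpeg", "-i", "pipe:0",   # assumption: the camera emits MJPEG
            "-c:v", "libvpx", "-deadline", "realtime",
            "-f", "webm", "-listen", "1", "-seekable", "0",
            "http://0.0.0.0:8090",
        ],
        stdin=gphoto.stdout,
    )

    gphoto.stdout.close()  # ffmpeg now owns the read end of the pipe
    ffmpeg.wait()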


  • How to decode and display a real-time H264 stream using ffmpeg in Python?

    25 March 2022, by yiiiiiiiran

    I would like to pipe the live stream to ffmpeg and display it in real time using Python.

    Does anyone know how to pipe the stream in, and in the meantime display it after decoding?

    I managed to get a real-time stream from my Raspberry Pi 3 to a Windows PC, using an RS232 connection at a baud rate of 2M.

    The stream format is H264. The data package I get for each frame is in . In order for the program to know when each package ends, I've added

    bytes([0xcc,0xdd,0xee,0xff])

    to the end of each package, so that my serial port reads a package until it sees those bytes.

    Let's assume the stream has WIDTH, HEIGHT, NUM_FRAMES, FPS = 320, 240, 90, 30.

    I have this command for decoding the H264 stream:

    cmd = ["C:/XXXXXX/ffmpeg.exe",
           "-probesize", "32",
           "-flags", "low_delay",
           "-f", "h264",
           "-i", "pipe:",
           "-f", "rawvideo", "-pix_fmt", "rgb24", "-s", "384x216",
           "pipe:"]

    decode_process = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE)

    The stream packages are read like this:

    while datetime.now() < end_time:
        pkg = ser.read_until(expected=bytes([0xcc,0xdd,0xee,0xff]))  # output <class 'bytes'>
        frame_len = len(pkg) - 4          # strip the 4 delimiter bytes
        frame_inBytes = pkg[0:frame_len]
        decode_process.stdin.write(frame_inBytes)

    I want to write the real-time stream to the pipe; however, it shows this error:

    [h264 @ 0000017322a3e980] missing picture in access unit with size 48
    [h264 @ 0000017322a3e980] no frame!
    [h264 @ 0000017322a2d240] Stream #0: not enough frames to estimate rate; consider increasing probesize
    [h264 @ 0000017322a2d240] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
    Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (32) options
    Input #0, h264, from 'pipe:':
      Duration: N/A, bitrate: N/A
      Stream #0:0: Video: h264, none, 25 tbr, 1200k tbn
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
    [h264 @ 0000017322a3f180] no frame!
    Error while decoding stream #0:0: Invalid data found when processing input
    Cannot determine format of input stream 0:0 after EOF
    Error marking filters as finished
    Conversion failed!
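
    As for the display side, a minimal sketch under the question's own setup: read exactly one rgb24 frame's worth of bytes from decode_process.stdout per iteration and hand it to OpenCV. The size must match the -s 384x216 passed to ffmpeg; numpy and cv2 are additions for illustration. Reading stdout in its own thread keeps the stdin writes from deadlocking once the pipe buffer fills:

    import threading
    import numpy as np
    import cv2

    WIDTH, HEIGHT = 384, 216            # must match ffmpeg's "-s 384x216"
    FRAME_BYTES = WIDTH * HEIGHT * 3    # rgb24 = 3 bytes per pixel

    def show_frames():
        while True:
            raw = decode_process.stdout.read(FRAME_BYTES)
            if len(raw) < FRAME_BYTES:  # decoder exited or stream ended
                break
            frame = np.frombuffer(raw, np.uint8).reshape((HEIGHT, WIDTH, 3))
            cv2.imshow("stream", frame[:, :, ::-1])  # RGB -> BGR for OpenCV
            cv2.waitKey(1)

    threading.Thread(target=show_frames, daemon=True).start()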


  • How do I encode a video stream to multiple output formats in parallel with ffmpeg?

    21 July 2022, by rgov

    I would like to use one FFmpeg process to receive video input and then pass that video to multiple separate encoder processes in order to efficiently make use of all available CPU cores.

    The FFmpeg wiki article on Creating multiple outputs has this note from @rogerdpack:

    Outputting and re encoding multiple times in the same FFmpeg process will typically slow down to the "slowest encoder" in your list. Some encoders (like libx264) perform their encoding "threaded and in the background" so they will effectively allow for parallel encodings, however audio encoding may be serial and become the bottleneck, etc. It seems that if you do have any encodings that are serial, it will be treated as "real serial" by FFmpeg and thus your FFmpeg may not use all available cores. One work around to this is to use multiple ffmpeg instances running in parallel, or possible piping from one ffmpeg to another to "do the second encoding" etc. Or if you can avoid the limiting encoder (ex: using a different faster one [ex: raw format] or just doing a raw stream copy) that might help.


    The article has an example of using the tee pseudo-muxer, but that uses a single instance of FFmpeg. The example of piping from one instance of FFmpeg to another only allows one encoder process.


    A 10-year-old version of the same article mentions using the tee process, but that passage was subsequently deleted:


    Another option is to output from FFmpeg to "-" then to pipe that to a "tee" command, which can send it to multiple other processes, for instance 2 different other ffmpeg processes for encoding (this may save time, as if you do different encodings, and do the encoding in 2 different simultaneous processes, it might do encoding more in parallel than elsewise). Unbenchmarked, however.


    Along the same lines: some of the example commands use mpegts to encapsulate frames before passing them between processes. Does this impose any constraints on the codecs or types of metadata that can be sent to downstream processes?
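
    As a sketch of the "pipe from one ffmpeg into several others" workaround: one process emits an mpegts stream on stdout, and a small fan-out loop copies it to two independent encoder processes, each free to use its own cores. The file names and codecs here are illustrative only:

    import subprocess

    # One ingest process writes the source as MPEG-TS to stdout...
    src = subprocess.Popen(
        ["ffmpeg", "-i", "input.mp4", "-c", "copy", "-f", "mpegts", "pipe:1"],
        stdout=subprocess.PIPE,
    )

    # ...and two separate encoder processes read it independently.
    enc_a = subprocess.Popen(
        ["ffmpeg", "-f", "mpegts", "-i", "pipe:0", "-c:v", "libx264", "out_h264.mp4"],
        stdin=subprocess.PIPE,
    )
    enc_b = subprocess.Popen(
        ["ffmpeg", "-f", "mpegts", "-i", "pipe:0", "-c:v", "libvpx-vp9", "out_vp9.webm"],
        stdin=subprocess.PIPE,
    )

    # Play the role of "tee": copy every chunk to both encoders.
    while True:
        chunk = src.stdout.read(65536)
        if not chunk:
            break
        enc_a.stdin.write(chunk)
        enc_b.stdin.write(chunk)

    for enc in (enc_a, enc_b):
        enc.stdin.close()
        enc.wait()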
