Other articles (102)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all of the software dependencies on the server.
    If you want to use this archive for a "farm mode" installation, you will also need to make further modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the following two images for a comparison.
    To do so, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (10311)

  • fate/spdif: Add spdif tests

    11 September 2022, by Andreas Rheinhardt
    fate/spdif: Add spdif tests
    

    These tests exercise both the demuxer and the muxer wherever
    possible. It is not always possible because the muxer supports
    more codecs than the demuxer.

    The spdif demuxer does not currently set the need_parsing flag.
    If one were to set it to AVSTREAM_PARSE_FULL, the test results
    would change as follows:
    - For spdif-aac-remux, the packets are currently padded to 16 bits,
    i.e. if the actual packet size is odd, there is a padding byte.
    The parser splits this byte off into a one-byte packet of its own.
    Insanely, these one-byte packets get the same duration as normal
    packets, i.e. timing is ruined.
    - The DCA-remux tests get proper duration/timestamps.
    - In the spdif-mp2-remux test the demuxer marks the stream as
    being MP2; the parser sets it to MP3 and this triggers
    the "Codec change in IEC 61937" codepath; this test therefore
    returns only two packets with the parser.
    - For spdif-mp3-remux some bytes end up in different packets:
    some input packets of this file have an odd length (417B instead
    of 418B like all the other packets) and are padded to 418B.
    Without a parser, all packets returned by the spdif demuxer
    are 418B. With a parser, the packets that were originally 417B
    are 417B again, but the padding byte is not discarded; instead it
    is added to the next packet, which is now 419B.
    This fixes "Multiple frames in a packet" warning and avoids
    an "Invalid data found when processing input" error when decoding.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] tests/Makefile
    • [DH] tests/fate/spdif.mak
    • [DH] tests/ref/fate/spdif-aac-remux
    • [DH] tests/ref/fate/spdif-ac3-remux
    • [DH] tests/ref/fate/spdif-dca-core-bswap
    • [DH] tests/ref/fate/spdif-dca-core-remux
    • [DH] tests/ref/fate/spdif-dca-master
    • [DH] tests/ref/fate/spdif-dca-master-core
    • [DH] tests/ref/fate/spdif-dca-master-core-remux
    • [DH] tests/ref/fate/spdif-eac3
    • [DH] tests/ref/fate/spdif-mlp
    • [DH] tests/ref/fate/spdif-mp2-remux
    • [DH] tests/ref/fate/spdif-mp3-remux
    • [DH] tests/ref/fate/spdif-truehd
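
    As an aside (not part of the commit message): assuming a configured FFmpeg source tree and a local copy of the FATE sample suite, the tests named after the reference files above would normally be run through the corresponding fate- make targets, along these lines:

    # fetch the sample suite, then run two of the new spdif tests
    make fate-rsync SAMPLES=fate-suite/
    make fate-spdif-aac-remux fate-spdif-mp3-remux SAMPLES=fate-suite/
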
  • How do I encode a video stream to multiple output formats in parallel with ffmpeg?

    21 July 2022, by rgov

    I would like to use one FFmpeg process to receive video input and then pass that video to multiple separate encoder processes in order to efficiently make use of all available CPU cores.

    The FFmpeg wiki article on Creating multiple outputs has this note from @rogerdpack:

    Outputting and re-encoding multiple times in the same FFmpeg process will typically slow down to the "slowest encoder" in your list. Some encoders (like libx264) perform their encoding "threaded and in the background", so they will effectively allow for parallel encodings; however, audio encoding may be serial and become the bottleneck, etc. It seems that if you do have any encodings that are serial, they will be treated as "real serial" by FFmpeg and thus your FFmpeg may not use all available cores. One workaround for this is to use multiple ffmpeg instances running in parallel, or possibly piping from one ffmpeg to another to "do the second encoding", etc. Or, if you can avoid the limiting encoder (e.g. by using a different, faster one [e.g. a raw format] or just doing a raw stream copy), that might help.

    The article has an example of using the tee pseudo-muxer, but it uses a single instance of FFmpeg. The example of piping from one instance of FFmpeg to another only allows one encoder process.
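
    For illustration (this command is not from the wiki; file names and codecs are my own assumptions), a single-process tee invocation along those lines encodes once and duplicates the result to several outputs:

    # one encode, two outputs: a local MP4 file and an MPEG-TS stream over UDP
    ffmpeg -i input.mkv -map 0:v -map 0:a -c:v libx264 -c:a aac -f tee \
      "archive.mp4|[f=mpegts]udp://127.0.0.1:1234"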

    A 10-year-old version of the same article mentions piping to the tee command, but that passage was subsequently deleted:

    Another option is to output from FFmpeg to "-", then pipe that to a "tee" command, which can send it to multiple other processes, for instance 2 different other ffmpeg processes for encoding (this may save time, as if you do different encodings, and do the encoding in 2 different simultaneous processes, it might do encoding more in parallel than otherwise). Unbenchmarked, however.
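
    A rough sketch of that idea (requires bash process substitution; file names and encoders are placeholders, and yuv4mpegpipe carries video only, no audio):

    # decode once, fan the raw frames out to two independent encoder processes
    ffmpeg -i input.mkv -an -f yuv4mpegpipe - \
      | tee >(ffmpeg -i - -c:v libx264 out_h264.mp4) \
            >(ffmpeg -i - -c:v libvpx-vp9 out_vp9.webm) \
      > /dev/null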

    Along the same lines: some of the example commands use the mpegts format to encapsulate frames before passing them between processes. Does this impose any constraints on the codecs or types of metadata that can be sent to the downstream processes?
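
    For what it is worth, the MPEG-TS container can only carry the stream types defined for it, whereas FFmpeg's own nut format is essentially codec-agnostic, so it is a common choice as an intermediate when piping between processes. A hedged sketch (paths are placeholders):

    # pass arbitrary codecs between two ffmpeg processes via a nut-wrapped pipe
    ffmpeg -i input.mkv -c copy -f nut - | ffmpeg -f nut -i - -c:v libx264 out.mp4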

  • How to capture video with gphoto2 + ffmpeg and serve it to an HTML embed

    1 April 2021, by Doglas Antonio Dembogurski Fei

    I am trying to capture video from a Panasonic DC-GH5 camera, serve it, and access it from a browser without ffserver, because ffserver is deprecated.

    I am using Ubuntu 20.04.

    # gphoto2 -v
    gphoto2         2.5.23         gcc, popt(m), exif, cdk, aa, jpeg, readline
    libgphoto2      2.5.25         standard camlibs (SKIPPING lumix), gcc, ltdl, EXIF
    libgphoto2_port 0.12.0         iolibs: disk ptpip serial usb1 usbdiskdirect usbscsi, gcc, ltdl, EXIF, USB, serial without locking

    I am trying this command:

    ffmpeg -f video4linux2 -s 640x480 -r 30 -i /dev/video0 -thread_queue_size 512 -ac 1 -f alsa -i pulse -f webm -listen 1 -seekable 0 -multiple_requests 1 http://localhost:8090

    and this embed:

    <video src="http://localhost:8090"></video>

    in index.php, but nothing appears. If anyone knows a way to set up a server on a specific port I would appreciate it. Thank you.
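
    For context, gphoto2 can stream the camera's live view to stdout with --capture-movie, and that stream can be piped straight into an ffmpeg HTTP listener similar to the one above. A hedged sketch (the port, the codec, and the assumption that the camera delivers an MJPEG live view are mine):

    # camera live view -> re-encode to WebM -> serve over HTTP on port 8090
    gphoto2 --stdout --capture-movie \
      | ffmpeg -i - -c:v libvpx -f webm -listen 1 -seekable 0 http://0.0.0.0:8090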
