Advanced search

Media (0)

Keyword: - Tags -/interaction

No media matching your criteria is available on this site.

Other articles (66)

  • Participate in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    Translation is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently available only in French and (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, MediaSPIP init automatically applies a preconfiguration so that the new feature works right away; no separate configuration step is therefore required.

  • Permissions overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

On other sites (13133)

  • How to use ffmpeg to overlay waveforms on xstack mosaics and specify specific audio for playback

    1 May 2022, by kellib

    I would like to make a mosaic of multiple titled streams, 1) specifying which of the audio streams to play, and 2) overlaying waveforms at the bottom of each video tile for the audio it belongs to.

    I'm successfully able to create the titled mosaic of streams with the code below.

    However:

    1. I'm having a hard time figuring out how to specify just one of the specific audio sources. I found amix, but I don't really want to mix them; I just want to specify audio [a0], or [a1], or [a2], etc.

    2. I'm having a hard time figuring out how to overlay the waveforms at the bottom of the video for each of the tiles. I struggled trying to figure out putting showwaves into the mix. Is it possible? (A sketch covering both points follows the command below.)
    I want each tile to look like this, but since these are rtmp streams, they need to play out the matching waveforms dynamically with each stream. https://dragonquest64.blogspot.com/2020/01/ffmpeg-audio-waveform.html

    If someone could point me in the right direction, that would be great. I'm getting close, but I'm pretty new to all of this, and have already spent way more time than I should have, so would love a little help.

    ffmpeg \
-i rtmp://my.cdn.com/srcEncoders/STREAM-1 \
-i rtmp://my.cdn.com/srcEncoders/STREAM-2 \
-i rtmp://my.cdn.com/srcEncoders/STREAM-3 \
-i rtmp://my.cdn.com/srcEncoders/STREAM-4 \
  -filter_complex " \
      [0:v] setpts=PTS-STARTPTS, scale=qvga \
    , drawtext=text=STREAM-1:fontsize=20:x=10:y=10:fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5 [a0]; \
      [1:v] setpts=PTS-STARTPTS, scale=qvga \
    , drawtext=text=STREAM-2:fontsize=20:x=10:y=10:fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5 [a1]; \
      [2:v] setpts=PTS-STARTPTS, scale=qvga \
    , drawtext=text=STREAM-3:fontsize=20:x=10:y=10:fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5 [a2]; \
      [3:v] setpts=PTS-STARTPTS, scale=qvga \
    , drawtext=text=STREAM-4:fontsize=20:x=10:y=10:fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5 [a3]; \
      [a0][a1][a2][a3]xstack=inputs=4:layout=0_0|0_h0|w0_0|w0_h0[out]; \
    amix=inputs=1
      " \
  -map "[out]" \
 -c:v libx264 -b:v 1000k -g 30 -keyint_min 120 -profile:v baseline -preset veryfast -f mpegts "udp://127.0.0.1:1234?pkt_size=1316"
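
    Here is a rough, untested sketch of how both points might be handled at once, keeping the same four hypothetical rtmp inputs as above: the video labels are renamed [v0]..[v3], each input's audio is turned into a small showwaves strip that is overlaid at the bottom of its own tile, and the audio of one chosen input (input 1, i.e. STREAM-2, picked arbitrarily here) is mapped directly with -map instead of going through amix. The 320x60 strip size is only an assumption that matches the qvga tile width.

# Sketch only (untested): per-tile waveform overlay plus one selected audio stream.
ffmpeg \
  -i rtmp://my.cdn.com/srcEncoders/STREAM-1 \
  -i rtmp://my.cdn.com/srcEncoders/STREAM-2 \
  -i rtmp://my.cdn.com/srcEncoders/STREAM-3 \
  -i rtmp://my.cdn.com/srcEncoders/STREAM-4 \
  -filter_complex " \
    [0:v] setpts=PTS-STARTPTS, scale=qvga, drawtext=text=STREAM-1:fontsize=20:x=10:y=10:fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5 [v0]; \
    [1:v] setpts=PTS-STARTPTS, scale=qvga, drawtext=text=STREAM-2:fontsize=20:x=10:y=10:fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5 [v1]; \
    [2:v] setpts=PTS-STARTPTS, scale=qvga, drawtext=text=STREAM-3:fontsize=20:x=10:y=10:fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5 [v2]; \
    [3:v] setpts=PTS-STARTPTS, scale=qvga, drawtext=text=STREAM-4:fontsize=20:x=10:y=10:fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5 [v3]; \
    [0:a] showwaves=s=320x60:mode=line:colors=white [w0]; \
    [1:a] showwaves=s=320x60:mode=line:colors=white [w1]; \
    [2:a] showwaves=s=320x60:mode=line:colors=white [w2]; \
    [3:a] showwaves=s=320x60:mode=line:colors=white [w3]; \
    [v0][w0] overlay=x=0:y=main_h-overlay_h [t0]; \
    [v1][w1] overlay=x=0:y=main_h-overlay_h [t1]; \
    [v2][w2] overlay=x=0:y=main_h-overlay_h [t2]; \
    [v3][w3] overlay=x=0:y=main_h-overlay_h [t3]; \
    [t0][t1][t2][t3] xstack=inputs=4:layout=0_0|0_h0|w0_0|w0_h0 [out] \
  " \
  -map "[out]" -map 1:a:0 \
  -c:v libx264 -b:v 1000k -g 30 -keyint_min 120 -profile:v baseline -preset veryfast \
  -c:a aac -b:a 128k \
  -f mpegts "udp://127.0.0.1:1234?pkt_size=1316"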

  • How to generate a PDF (1.7) from an MP4 movie (Rich Media annotation)?

    19 August 2020, by malat

    I am a happy user of img2pdf. This tool does the minimal amount of work to put a series of JPEG 2000/JPEG/PNG images into a PDF "envelope". However, I am now faced with a new challenge: embedding an MP4 file into a PDF "envelope".

    I see that commercial tools can do it, as seen at:

    It seems to have been introduced in ISO 32000-1 (PDF 1.7 Extension Level 5)

    I am looking for a solution which will use the Rich Media annotation inside the PDF stream.

    There are dozens of duplicate questions on Super User/Stack Overflow, which all pretty much refer to the imagemagick/convert command-line tool. But in my case, convert expands the images into a multi-page PDF (which is not my desired behavior):

    $ convert input.mp4 output.pdf
$ pdfinfo output.pdf 
Title:          out
Producer:       https://imagemagick.org
CreationDate:   Wed Aug 19 15:38:01 2020 CEST
ModDate:        Wed Aug 19 15:38:01 2020 CEST
Tagged:         no
UserProperties: no
Suspects:       no
Form:           none
JavaScript:     no
Pages:          1601
Encrypted:      no
Page size:      352 x 288 pts
Page rot:       0
File size:      534407296 bytes
Optimized:      no
PDF version:    1.3


    with:

    $ convert --version
Version: ImageMagick 6.9.10-23 Q16 x86_64 20190101 https://imagemagick.org
Copyright: © 1999-2019 ImageMagick Studio LLC
License: https://imagemagick.org/script/license.php
Features: Cipher DPC Modules OpenMP 
Delegates (built-in): bzlib djvu fftw fontconfig freetype jbig jng jpeg lcms lqr ltdl lzma openexr pangocairo png tiff webp wmf x xml zlib


    and

    $ file input.mp4 
input.mp4: ISO Media, MP4 Base Media v1 [IS0 14496-12:2003]
$ ffprobe -v quiet -print_format json  -show_streams input.mp4 | grep codec_long_name
            "codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",

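    One route that might work (an untested sketch, not a verified recipe): the LaTeX media9 package embeds media files through RichMedia annotations as defined in the Adobe extensions to PDF 1.7, so the MP4 could be wrapped in a one-page LaTeX document and compiled with pdflatex. The package name, the VPlayer.swf resource and the addresource/flashvars options below are assumptions taken from the media9 documentation, not something confirmed against this exact file.

# Untested sketch: wrap input.mp4 in a minimal LaTeX document and let the
# media9 package write the RichMedia annotation (requires a TeX install
# that ships media9, e.g. TeX Live, and pdflatex).
cat > movie.tex <<'EOF'
\documentclass{article}
\usepackage{media9}
\pagestyle{empty}
\begin{document}
\includemedia[
  width=\linewidth,height=0.75\linewidth,
  activate=pageopen,
  addresource=input.mp4,
  flashvars={source=input.mp4}
]{}{VPlayer.swf}
\end{document}
EOF
pdflatex movie.tex   # should produce movie.pdf with input.mp4 embedded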

  • Zipping Conda Environment Breaks Audioread's Backend (Python/Pyspark)

    25 October 2017, by Tim

    I have previously built pyspark environments using conda to package all dependencies and ship them to all the nodes at runtime. Here's how I create the environment:

    `conda/bin/conda create -p conda_env --copy -y python=2  \
    numpy scipy ffmpeg gcc libsndfile gstreamer pygobject audioread librosa`

    `zip -r conda_env.zip conda_env`

    Then, after sourcing conda_env and running the pyspark shell, I can successfully execute:

    `import librosa
    y, sr = librosa.load("test.m4a")`

    Note that without the environment sourced, this script results in an error, as ffmpeg/gstreamer are NOT installed locally on my machine.

    Submitting a script to the cluster results in a librosa.load error that traces back to audioread, indicating that the backend (either gstreamer or ffmpeg) can no longer be found in the zipped archive environment. The stack trace is below:

    Submit:

    `PYSPARK_PYTHON=./NODE/conda_env/bin/python spark-submit --verbose \
           --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=./NODE/conda_env/bin/python \
           --conf spark.yarn.appMasterEnv.PYTHON_EGG_CACHE=/tmp \
           --conf spark.executorEnv.PYTHON_EGG_CACHE=/tmp \
           --conf spark.yarn.executor.memoryOverhead=1024 \
           --conf spark.hadoop.validateOutputSpecs=false \
           --conf spark.driver.cores=5 \
           --conf spark.driver.maxResultSize=0 \
           --master yarn --deploy-mode cluster --queue production \
           --num-executors 20 --executor-cores 5 --executor-memory 40G \
           --driver-memory 20G --archives conda_env.zip#NODE \
           --jars /data/environments/sqljdbc41.jar \
           script.py`

    Trace:

    `Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
     File "/mnt/yarn/usercache/user/appcache/application_1506634200253_39889/container_1506634200253_39889_01_000003/pyspark.zip/pyspark/worker.py", line 172, in main
       process()
     File "/mnt/yarn/usercache/user/appcache/application_1506634200253_39889/container_1506634200253_39889_01_000003/pyspark.zip/pyspark/worker.py", line 167, in process
       serializer.dump_stream(func(split_index, iterator), outfile)
     File "/mnt/yarn/usercache/user/appcache/application_1506634200253_39889/container_1506634200253_39889_01_000003/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
       vs = list(itertools.islice(iterator, batch))
     File "script.py", line 245, in <lambda>
     File "script.py", line 119, in download_audio
     File "/mnt/yarn/usercache/user/appcache/application_1506634200253_39889/container_1506634200253_39889_01_000003/NODE/conda_env/lib/python2.7/site-packages/librosa/core/audio.py", line 107, in load
       with audioread.audio_open(os.path.realpath(path)) as input_file:
     File "/mnt/yarn/usercache/user/appcache/application_1506634200253_39889/container_1506634200253_39889_01_000003/NODE/conda_env/lib/python2.7/site-packages/audioread/__init__.py", line 114, in audio_open
       raise NoBackendError()
    NoBackendError`

    My question is: How can I package this archive so that librosa (really audioread) is able to find the backend and load .m4a files?
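
    A possible direction, not a verified fix: audioread's ffmpeg backend shells out to an ffmpeg binary found via PATH, and inside the YARN containers the unpacked archive's bin directory is not on PATH by default, so prepending ./NODE/conda_env/bin (and possibly setting LD_LIBRARY_PATH) via spark.executorEnv may let the backend be found again. It is also worth checking that zipping preserved execute bits and symlinks inside conda_env. The sketch below keeps only the flags relevant to this point and drops the rest of the submit command above.

# Possible direction (untested): make the shipped conda env's ffmpeg visible
# to audioread inside each YARN container by prepending it to PATH.
PYSPARK_PYTHON=./NODE/conda_env/bin/python spark-submit \
       --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=./NODE/conda_env/bin/python \
       --conf spark.executorEnv.PATH=./NODE/conda_env/bin:/usr/bin:/bin \
       --conf spark.executorEnv.LD_LIBRARY_PATH=./NODE/conda_env/lib \
       --master yarn --deploy-mode cluster --queue production \
       --archives conda_env.zip#NODE \
       --jars /data/environments/sqljdbc41.jar \
       script.py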