Other articles (112)

  • Customize by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Customizing categories

    21 June 2013

    The category creation form
    For those who know SPIP well, a category can be likened to a rubrique (section).
    For a document of the category type, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of the media type, the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration section that you can specify the (...)

On other sites (13149)

  • Docker fails to import FFMPEG or does not find it

    8 December 2022, by stxss

    So I'm trying to create a Telegram bot using Python and ffmpeg, and I want to deploy it to a server through a Docker image.

    I already looked through a lot of resources, as I've been staring at the same 5 lines of code for the past three days, and neither Discord nor previous Stack Overflow answers have helped me.

    This is my Dockerfile, with which half of the program works (everything apart from the ffmpeg functionality).

FROM python:3.9

RUN mkdir /app
WORKDIR /app

COPY requirements.txt ./
RUN pip3 install --no-cache-dir --user -r requirements.txt

COPY . .

ENTRYPOINT ["/usr/bin/python3", "./app.py"]

    With this code I get the following error when trying to use a function that invokes ffmpeg functionality.

Traceback (most recent call last):
  File "/root/.local/lib/python3.9/site-packages/pyrogram/dispatcher.py", line 240, in handler_worker
    await handler.callback(self.client, *args)
  File "/app/./app.py", line 274, in choice_from_inline
    await helpers.trim_file(trim_length, "audio", chat_id_for_join.strip())
  File "/app/helpers.py", line 53, in trim_file
    output = ffmpeg.output(
  File "/root/.local/lib/python3.9/site-packages/ffmpeg/_run.py", line 313, in run
    process = run_async(
  File "/root/.local/lib/python3.9/site-packages/ffmpeg/_run.py", line 284, in run_async
    return subprocess.Popen(
  File "/usr/lib/python3.9/subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/lib/python3.9/subprocess.py", line 1823, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'

    Another issue: when changing the ENTRYPOINT (or using CMD) to ["python3", "./app.py"], everything works well locally, but as soon as I try to deploy, the Docker container just doesn't work or crashes, because I get the error ModuleNotFoundError: No module named 'ffmpeg'.
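
    These are two distinct failures, which is easy to miss; a minimal illustration (not the poster's code) of where each one comes from:

# The FileNotFoundError above is raised by subprocess when the ffmpeg *binary*
# is missing from the image; ModuleNotFoundError means the Python *package* is
# missing for whichever interpreter the ENTRYPOINT launches.
import subprocess
subprocess.run(["ffmpeg", "-version"])  # -> FileNotFoundError if no ffmpeg binary on PATH

import ffmpeg  # -> ModuleNotFoundError if ffmpeg-python isn't installed for this interpreter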

    I have already tried setting different ENV PATH and ENV PYTHONPATH values; it does absolutely nothing.
    I have tried COPY --from=jrottenberg/ffmpeg /usr/local ./ and it also doesn't work.
    I have tried explicitly using RUN apt-get install -y ffmpeg and similar commands with pip, pip3, etc.; it just doesn't work.
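
    For reference, a sketch of how the ffmpeg binary is usually baked into a Debian-based python:3.9 image (an illustration of the apt-get route mentioned above, not necessarily the poster's eventual fix; note that apt-get install fails in a fresh layer unless apt-get update runs first):

# Hypothetical variant of the Dockerfile above, with the ffmpeg binary installed
FROM python:3.9

# update the package lists before installing, then clean them up to keep the layer small
RUN apt-get update \
    && apt-get install -y --no-install-recommends ffmpeg \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt ./
RUN pip3 install --no-cache-dir --user -r requirements.txt
COPY . .

ENTRYPOINT ["/usr/bin/python3", "./app.py"]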

    I tried to use the COPY command to access /root/.local/lib/python3.9/site-packages/ffmpeg/_run.py, but I either get a "permission denied" or a "file does not exist" error.

    I am also using the ffmpeg-python wrapper, and I am on a Windows machine, if that's of any importance.

    At this point I'm contemplating finding another way to implement the functionality I want without using ffmpeg.

    I think I've included everything I have; if more is needed, I can provide it.

  • ffmpeg : combining/ordering vidstab and crop filters

    30 July 2016, by ljwobker

    I have a workflow which essentially takes a raw video file, crops away portions of the frame that aren't relevant, then performs a two-pass deshake using the vidstab filter. At the moment I'm running this as three distinct commands: one to do the crop, a second to do the vidstab "detect" pass, and a third to do the vidstab "transform" pass.

    My working script:

    # do the crop first and strip the audio
    nice -20 ffmpeg -hide_banner -ss $SEEK -i $INFILE -t $DURATION -preset veryfast -crf 12 -vf crop=0.60*in_w:in_h/9*8:0.22*in_w:0 -an -y $TEMP

    # now run the vidstab detection pass
    nice -20 ffmpeg -hide_banner -i $TEMP -vf vidstabdetect=stepsize=6:shakiness=10:accuracy=15:result=${INFILE}.trf -f null -

    # now the vidstab transform, with unsharp and writing the overlay text
    nice -20 ffmpeg -hide_banner -i $TEMP -preset veryfast -crf 22 -vf \
    " \
    vidstabtransform=input=${INFILE}.trf:zoom=2:smoothing=60,
    unsharp=5:5:0.8:3:3:0.4,
    drawtext=fontfile=/Windows/Fonts/arialbd.ttf:text=$DIVE:enable='between(t,0,65)':fontcolor=black:fontsize=72:x=w*0.01:y=h*0.01,  
    null"\
     -y $OUTFILE

    What I can't seem to figure out is how to combine the first two passes into a single command, which (at least in theory) would give a faster encode time, and at the very least would be simpler to maintain and would eliminate a pass of the encoder. What I tried is the second code block below, which builds a filterchain combining the initial crop with the vidstab detection filter.

    # this is a combined filter for the crop and the vidstab detect
    nice -20 ffmpeg -hide_banner -ss $SEEK -i $INFILE -t $DURATION -preset veryfast -crf 12 -vf \
    " \
    crop=0.60*in_w:in_h/9*8:0.22*in_w:0,
    vidstabdetect=stepsize=6:shakiness=10:accuracy=15:result=${INFILE}.trf,
    null " \
    -an -r 30 -y $TEMP


    # now run the transcoding and the vidstab transform
    nice -20 ffmpeg -hide_banner -i $TEMP -preset veryfast -crf 22 -vf \
    " \
    vidstabtransform=input=${INFILE}.trf:zoom=2:smoothing=60,
    unsharp=5:5:0.8:3:3:0.4,
    drawtext=fontfile=/Windows/Fonts/arialbd.ttf:text=$DIVE:enable='between(t,0,65)':fontcolor=black:fontsize=72:x=w*0.01:y=h*0.01,
    null"\
     -y $OUTFILE

    However, when I run this (and it does run), the final output video has most definitely NOT been effectively stabilized. The logs show that both the detect and the transform passes were processed; it's just that the output isn't right.
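
    One structural variant worth noting (a sketch using the same parameters, not a confirmed fix): vidstabtransform needs the complete .trf file before it starts, so detect and transform can never share one filterchain, but the intermediate encode can be skipped by sending the detect pass to the null muxer and repeating the identical crop in the transform pass, so both passes see the same frames. Note also that the combined pass above adds -r 30, which changes the frame timing of $TEMP relative to the frames vidstabdetect analyzed; since the .trf holds per-frame transforms, such a mismatch could desynchronize the stabilization.

    # pass 1: crop + vidstab detect, no intermediate file written
    nice -20 ffmpeg -hide_banner -ss $SEEK -i $INFILE -t $DURATION -vf \
    "crop=0.60*in_w:in_h/9*8:0.22*in_w:0,
    vidstabdetect=stepsize=6:shakiness=10:accuracy=15:result=${INFILE}.trf" \
    -an -f null -

    # pass 2: identical crop, then transform + unsharp + drawtext, single encode
    nice -20 ffmpeg -hide_banner -ss $SEEK -i $INFILE -t $DURATION -preset veryfast -crf 22 -vf \
    "crop=0.60*in_w:in_h/9*8:0.22*in_w:0,
    vidstabtransform=input=${INFILE}.trf:zoom=2:smoothing=60,
    unsharp=5:5:0.8:3:3:0.4,
    drawtext=fontfile=/Windows/Fonts/arialbd.ttf:text=$DIVE:enable='between(t,0,65)':fontcolor=black:fontsize=72:x=w*0.01:y=h*0.01" \
    -an -y $OUTFILE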

  • I faced an ffmpeg error at runtime in my project

    3 July 2023, by Jesy J

    Runtime error: can't load audio from file: 'ffmpeg' not found. Please install
    'ffmpeg' in your system to use non-wav audio file format and make sure 'ffprobe'
    is in your path

    I configured ffmpeg on my system, but I still face this error.

    This is my code:

    !pip install gradio
!pip install SpeechRecognition
!pip install pydub
!pip install openai

import gradio as gr
import speech_recognition as sr
from pydub import AudioSegment
import openai

# Set up OpenAI API
openai.api_key = [MASKED]

# Function to convert text to speech using OpenAI's API
def text_to_speech(text, language):
    response = openai.Completion.create(
        engine="davinci",
        prompt=f"Translate the following English text into {language}: \"{text}\"",
        max_tokens=100,
        temperature=0.8,
        top_p=1.0,
        frequency_penalty=0.0,
        presence_penalty=0.0,
        stop=None,
        n=1,
        log_level="info"
    )
    return response.choices[0].text.strip()

# Function to recognize speech from audio
def speech_to_text(audio):
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio) as source:
        audio_data = recognizer.record(source)
    return recognizer.recognize_google(audio_data)

# Function to convert audio to desired language
def convert_language(audio, target_language):
    recognized_text = speech_to_text(audio)
    translated_text = text_to_speech(recognized_text, target_language)
    return translated_text

# Function to process user input and generate output
def process_audio(input_audio, target_language):
    converted_text = convert_language(input_audio.name, target_language)
    return gr.outputs.Audio(converted_text, type="filepath")

# Set up Gradio interface
audio_input = gr.inputs.Audio(source="microphone")

language_input = gr.inputs.Dropdown(choices=["English", "French", "German"])  # Add more languages as needed

output_audio = gr.outputs.Audio(type="filepath", label="Output Audio")

title = "Multilingual AI Voice Assistant"

description = "Upload an audio file and select the target language for translation."

gr.Interface(fn=process_audio, inputs=[audio_input, language_input], outputs=output_audio, title=title, description=description).launch()
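
    A quick way to check what the runtime actually sees (a sketch, not the poster's code): pydub shells out to the ffmpeg and ffprobe executables, so configuring ffmpeg "in my system" only helps if the interpreter's PATH can find them.

import shutil
from pydub import AudioSegment

# pydub needs the ffmpeg/ffprobe *binaries*; installing a Python package is not enough
for tool in ("ffmpeg", "ffprobe"):
    print(tool, "->", shutil.which(tool) or "NOT FOUND on PATH")

# if they are installed but not on PATH, point pydub at them explicitly
# (the paths below are hypothetical examples):
# AudioSegment.converter = r"C:\ffmpeg\bin\ffmpeg.exe"
# AudioSegment.ffprobe = r"C:\ffmpeg\bin\ffprobe.exe"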