
Advanced search
Media (9)
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (110)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows media playback on major mobile platforms with the above (...) -
APPENDIX: Plugins used specifically for the farm
5 March 2010, by
The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a farm instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (14112)
-
Video generation with FFmpeg and locally stored images [closed]
3 July, by Rahul Patil
I am facing an issue with FFmpeg when running the code below.


I'm working on a Flask application that generates a video by combining a sequence of images from a folder with a synthesized audio track from Bark (suno/bark-small). The idea is to use FFmpeg to stitch the images into a video, apply padding and scaling, and then merge it with the generated audio. I trigger the /generate-video endpoint with a simple curl POST request, passing a script that gets converted to audio. While the image and audio processing work as expected, FFmpeg fails during execution and the server returns a 500 error. I've added error logging to capture FFmpeg's stderr output, which suggests something is going wrong either with the generated input.txt file or with the format of the inputs passed to FFmpeg. I'm not sure whether the issue is related to file paths, the concat demuxer formatting, or possibly an audio/video duration mismatch. Any insights on how to debug or correct the FFmpeg command would be appreciated.
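For reference, the concat demuxer expects a plain-text list of `file`/`duration` directive pairs. A minimal sketch of the list the code aims to produce (the helper name `build_concat_list` is mine, not from the question) that mirrors `create_ffmpeg_input_list` and can be inspected before running FFmpeg:

```python
from pathlib import Path

IMAGE_DURATION = 3  # seconds per image, matching the question's config

def build_concat_list(images):
    """Build the text body of an FFmpeg concat-demuxer list.

    Each image gets a `file`/`duration` pair; the last image is then
    repeated once without a duration so the final frame is not cut off
    (a documented quirk of the concat demuxer).
    """
    lines = []
    for img in images:
        lines.append(f"file '{Path(img).resolve()}'")
        lines.append(f"duration {IMAGE_DURATION}")
    lines.append(f"file '{Path(images[-1]).resolve()}'")
    return "\n".join(lines) + "\n"
```

A quick way to validate the list without a full encode is `ffmpeg -f concat -safe 0 -i input.txt -f null -`, which decodes the inputs and reports any parse errors.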


The curl request is:


curl -X POST http://localhost:5000/generate-video \
 -H "Content-Type: application/json" \
 -d '{"script": "Hello, this is a test script to generate a video."}' \
 --output output_video.mp4



import os
import uuid
import subprocess
from pathlib import Path
import numpy as np
from flask import Flask, request, jsonify, send_file
from transformers import AutoProcessor, AutoModelForTextToWaveform
from scipy.io.wavfile import write as write_wav
import torch

# ========== CONFIG ==========
IMAGE_FOLDER = "./images"
OUTPUT_FOLDER = "./output"
RESOLUTION = (1280, 720)
IMAGE_DURATION = 3  # seconds per image
SAMPLE_RATE = 24000

app = Flask(__name__)
os.makedirs(OUTPUT_FOLDER, exist_ok=True)

# Load Bark-small model and processor
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained("suno/bark-small")
model = AutoModelForTextToWaveform.from_pretrained("suno/bark-small").to(device)


# ========== UTILS ==========
def run_ffmpeg(cmd):
    result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if result.returncode != 0:
        print("[FFmpeg ERROR]\n", result.stderr.decode())
        raise RuntimeError("FFmpeg failed.")
    else:
        print("[FFmpeg] Success.")


def find_images(folder):
    return sorted([
        f for f in Path(folder).glob("*")
        if f.suffix.lower() in {".jpg", ".jpeg", ".png"}
    ])


def create_ffmpeg_input_list(images, list_file_path):
    with open(list_file_path, "w") as f:
        for img in images:
            f.write(f"file '{img.resolve()}'\n")
            f.write(f"duration {IMAGE_DURATION}\n")
        # Repeat last image to avoid cutoff
        f.write(f"file '{images[-1].resolve()}'\n")


# ========== FLASK ROUTE ==========
@app.route('/generate-video', methods=['POST'])
def generate_video():
    data = request.get_json()
    script = data.get("script")
    if not script:
        return jsonify({"error": "No script provided"}), 400

    images = find_images(IMAGE_FOLDER)
    if not images:
        return jsonify({"error": "No images found in ./images"}), 400

    # Generate audio
    print("[1/3] Generating audio with Bark...")
    inputs = processor(script, return_tensors="pt").to(device)
    with torch.no_grad():
        audio_values = model.generate(**inputs)

    audio_np = audio_values[0].cpu().numpy().squeeze()
    audio_np = np.clip(audio_np, -1.0, 1.0)
    audio_int16 = (audio_np * 32767).astype(np.int16)

    audio_path = os.path.join(OUTPUT_FOLDER, f"{uuid.uuid4()}.wav")
    write_wav(audio_path, SAMPLE_RATE, audio_int16)

    # Create FFmpeg concat file
    print("[2/3] Preparing image list for FFmpeg...")
    list_file = os.path.join(OUTPUT_FOLDER, "input.txt")
    create_ffmpeg_input_list(images, list_file)

    # Final video path
    final_video_path = os.path.join(OUTPUT_FOLDER, f"{uuid.uuid4()}.mp4")

    # Run FFmpeg
    print("[3/3] Running FFmpeg to create video...")
    ffmpeg_cmd = [
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0", "-i", list_file,
        "-i", audio_path,
        "-vf", f"scale={RESOLUTION[0]}:{RESOLUTION[1]}:force_original_aspect_ratio=decrease,"
               f"pad={RESOLUTION[0]}:{RESOLUTION[1]}:(ow-iw)/2:(oh-ih)/2:color=black",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-c:a", "aac", "-b:a", "192k",
        "-shortest", "-movflags", "+faststart",
        final_video_path
    ]

    try:
        run_ffmpeg(ffmpeg_cmd)
    except RuntimeError:
        return jsonify({"error": "FFmpeg failed. Check server logs."}), 500

    return send_file(final_video_path, as_attachment=True)


# ========== RUN APP ==========
if __name__ == '__main__':
    app.run(debug=True)
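Since the question suspects an audio/video duration mismatch, it can help to log both durations before invoking FFmpeg. A small sketch using only the stdlib `wave` module (the helper name `check_duration_mismatch` is mine, not from the question; `-shortest` already trims the longer stream, so this is purely diagnostic):

```python
import wave

IMAGE_DURATION = 3  # seconds per image, as in the question's config

def check_duration_mismatch(audio_path, num_images):
    """Return (audio_seconds, video_seconds) so any gap can be logged
    before running FFmpeg."""
    with wave.open(audio_path, "rb") as wav:
        audio_seconds = wav.getnframes() / wav.getframerate()
    video_seconds = num_images * IMAGE_DURATION
    return audio_seconds, video_seconds
```

Calling this right after `write_wav(...)` and printing the two values makes it easy to see whether the slideshow or the Bark audio is the stream being cut short.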



-
Evolution #3822: move the form-related CSS out of spip.css into form.css
14 August 2017, by tetue tetue
See also: #3964
-
cbs: Describe allocate/free methods in tabular form
27 July 2020, by Mark Thompson
cbs: Describe allocate/free methods in tabular form
Unit types are split into three categories, depending on how their
content is managed:
* POD structure - these require no special treatment.
* Structure containing references to refcounted buffers - these can use
a common free function when the offsets of all the internal references
are known.
* More complex structures - these still require ad-hoc treatment.
For each codec we can then maintain a table of descriptors for each set of
equivalent unit types, defining the mechanism needed to allocate/free that
unit content. This is not required to be used immediately - a new alloc
function supports this, but does not replace the old one, which works without
referring to these tables.