
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (46)
-
Contribute to its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
At present, MediaSPIP is only available in French and (...) -
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors can edit their own information on the authors page -
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.
Distribution name | Version name | Version number
Debian | Squeeze | 6.x.x
Debian | Wheezy | 7.x.x
Debian | Jessie | 8.x.x
Ubuntu | The Precise Pangolin | 12.04 LTS
Ubuntu | The Trusty Tahr | 14.04
If you want to help us improve this list, you can give us access to a machine whose distribution is not listed above, or send us the fixes needed to add (...)
On other sites (8472)
-
Video generation and ffmpeg and locally stored images [closed]
3 July, by Rahul Patil
I am facing an issue with FFmpeg when running the code below.


I'm working on a Flask application that generates a video by combining a sequence of images from a folder and a synthesized audio track using Bark (suno/bark-small). The idea is to use FFmpeg to stitch the images into a video, apply padding and scaling, and then merge it with the generated audio. I'm triggering the /generate-video endpoint with a simple curl POST request, passing a script that gets converted to audio. While the image and audio processing work as expected, FFmpeg fails during execution, and the server returns a 500 error. I've added error logging to capture FFmpeg’s stderr output, which suggests something is going wrong either with the generated input.txt file or the format of the inputs passed to FFmpeg. I'm not sure if the issue is related to file paths, the concat demuxer formatting, or possibly audio/video duration mismatch. Any insights on how to debug or correct the FFmpeg command would be appreciated.


The curl request is:


curl -X POST http://localhost:5000/generate-video \
 -H "Content-Type: application/json" \
 -d '{"script": "Hello, this is a test script to generate a video."}' \
 --output output_video.mp4



import os
import uuid
import subprocess
from pathlib import Path
import numpy as np
from flask import Flask, request, jsonify, send_file
from transformers import AutoProcessor, AutoModelForTextToWaveform
from scipy.io.wavfile import write as write_wav
import torch

# ========== CONFIG ==========
IMAGE_FOLDER = "./images"
OUTPUT_FOLDER = "./output"
RESOLUTION = (1280, 720)
IMAGE_DURATION = 3 # seconds per image
SAMPLE_RATE = 24000

app = Flask(__name__)
os.makedirs(OUTPUT_FOLDER, exist_ok=True)

# Load Bark-small model and processor
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained("suno/bark-small")
model = AutoModelForTextToWaveform.from_pretrained("suno/bark-small").to(device)


# ========== UTILS ==========
def run_ffmpeg(cmd):
    result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if result.returncode != 0:
        print("[FFmpeg ERROR]\n", result.stderr.decode())
        raise RuntimeError("FFmpeg failed.")
    else:
        print("[FFmpeg] Success.")


def find_images(folder):
    return sorted([
        f for f in Path(folder).glob("*")
        if f.suffix.lower() in {".jpg", ".jpeg", ".png"}
    ])


def create_ffmpeg_input_list(images, list_file_path):
    with open(list_file_path, "w") as f:
        for img in images:
            f.write(f"file '{img.resolve()}'\n")
            f.write(f"duration {IMAGE_DURATION}\n")
        # Repeat last image to avoid cutoff
        f.write(f"file '{images[-1].resolve()}'\n")


# ========== FLASK ROUTE ==========
@app.route('/generate-video', methods=['POST'])
def generate_video():
    data = request.get_json()
    script = data.get("script")
    if not script:
        return jsonify({"error": "No script provided"}), 400

    images = find_images(IMAGE_FOLDER)
    if not images:
        return jsonify({"error": "No images found in ./images"}), 400

    # Generate audio
    print("[1/3] Generating audio with Bark...")
    inputs = processor(script, return_tensors="pt").to(device)
    with torch.no_grad():
        audio_values = model.generate(**inputs)

    audio_np = audio_values[0].cpu().numpy().squeeze()
    audio_np = np.clip(audio_np, -1.0, 1.0)
    audio_int16 = (audio_np * 32767).astype(np.int16)

    audio_path = os.path.join(OUTPUT_FOLDER, f"{uuid.uuid4()}.wav")
    write_wav(audio_path, SAMPLE_RATE, audio_int16)

    # Create FFmpeg concat file
    print("[2/3] Preparing image list for FFmpeg...")
    list_file = os.path.join(OUTPUT_FOLDER, "input.txt")
    create_ffmpeg_input_list(images, list_file)

    # Final video path
    final_video_path = os.path.join(OUTPUT_FOLDER, f"{uuid.uuid4()}.mp4")

    # Run FFmpeg
    print("[3/3] Running FFmpeg to create video...")
    ffmpeg_cmd = [
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0", "-i", list_file,
        "-i", audio_path,
        "-vf", f"scale={RESOLUTION[0]}:{RESOLUTION[1]}:force_original_aspect_ratio=decrease,"
               f"pad={RESOLUTION[0]}:{RESOLUTION[1]}:(ow-iw)/2:(oh-ih)/2:color=black",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-c:a", "aac", "-b:a", "192k",
        "-shortest", "-movflags", "+faststart",
        final_video_path
    ]

    try:
        run_ffmpeg(ffmpeg_cmd)
    except RuntimeError:
        return jsonify({"error": "FFmpeg failed. Check server logs."}), 500

    return send_file(final_video_path, as_attachment=True)


# ========== RUN APP ==========
if __name__ == '__main__':
    app.run(debug=True)
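
A quick way to narrow down the failure is to run the same steps by hand outside Flask. The sketch below is only illustrative: it assumes the ./output/input.txt generated above already exists, and the test output filename is arbitrary.

# Check the concat list: each image should yield a line "file '/abs/path.jpg'" followed by "duration 3"
cat ./output/input.txt

# Re-run just the video step to see FFmpeg's full error output on the console
ffmpeg -y -f concat -safe 0 -i ./output/input.txt \
 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2:color=black" \
 -c:v libx264 -pix_fmt yuv420p video_only_test.mp4

If the standalone command succeeds, the problem is likely in how the Flask process builds paths or invokes FFmpeg; if it fails, the stderr printed here usually points at the offending entry in input.txt.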



-
ffmpeg video to rtsp not working, ffmpeg is not reading the video frame by frame [closed]
15 January, by Leroy Jeslyn
Here's the ffmpeg command:


gst-launch-1.0 videotestsrc ! decodebin ! videoconvert ! videoscale \
 ! video/x-raw,width=1280,height=720 ! x264enc speed-preset=ultrafast \
 tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5000 sync=false \
 ! rtspserver service=0/test



The output was:


ffmpeg version 7.1-full_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers 
 built with gcc 14.2.0 (Rev1, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libopenjpeg --enable-libquirc --enable-libuavs3d --enable-libxevd --enable-libzvbi --enable-libqrencode --enable-librav1e --enable-libsvtav1 --enable-libvvenc --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxeve --enable-libxvid --enable-libaom --enable-libjxl --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-liblc3 --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
 libavutil 59. 39.100 / 59. 39.100
 libavcodec 61. 19.100 / 61. 19.100
 libavformat 61. 7.100 / 61. 7.100
 libavdevice 61. 3.100 / 61. 3.100
 libavfilter 10. 4.100 / 10. 4.100
 libswscale 8. 3.100 / 8. 3.100
 libswresample 5. 3.100 / 5. 3.100
 libpostproc 58. 3.100 / 58. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Render/fire3.mp4':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp41isom
 creation_time : 2025-01-08T13:10:15.000000Z
 Duration: 00:01:21.79, start: 0.000000, bitrate: 12799 kb/s
 Stream #0:0[0x1](und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 12798 kb/s, 14 fps, 14 tbr, 14k tbn (default)
 Metadata:
 creation_time : 2025-01-08T13:10:15.000000Z
 handler_name : VideoHandler
 vendor_id : [0][0][0][0]
 encoder : AVC Coding
[out#0/rtsp @ 00000178486d9a00] Codec AVOption b:a (set bitrate (in bits/s)) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some decoder which was not actually used for any stream.
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0000017848d200c0] using SAR=1/1
[libx264 @ 0000017848d200c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000017848d200c0] profile High, level 3.1, 4:2:0, 8-bit
[libx264 @ 0000017848d200c0] 264 - core 164 r3192 c24e06c - H.264/MPEG-4 AVC codec - Copyleft 2003-2024 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=22 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=14 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=1000 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00




This is where the output stopped: instead of reading the video frame by frame, it stops here and runs indefinitely. I tried using VLC and a Python script to read the RTSP URL, but that didn't work.
I tried the alternative using GStreamer, but I couldn't install rtsp-server on Windows 11.


Thank you for your time; any suggestions or answers are appreciated. The end goal is to convert a video to an RTSP URL.
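
For what it's worth, FFmpeg by itself does not, as far as I know, act as an RTSP server that players can connect to; the usual pattern is to run a standalone RTSP server (for example MediaMTX, formerly rtsp-simple-server) and publish to it with FFmpeg. A minimal sketch, where the server address, port 8554 and path /test are all assumptions:

# Start an RTSP server separately (e.g. MediaMTX), then publish the file to it:
ffmpeg -re -stream_loop -1 -i Render/fire3.mp4 \
 -c:v libx264 -preset ultrafast -tune zerolatency \
 -f rtsp rtsp://127.0.0.1:8554/test

# Players such as VLC can then open rtsp://127.0.0.1:8554/test

Here -re paces the file in real time and -stream_loop -1 keeps it looping, which matches the goal of exposing a video file as a continuous RTSP URL.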


-
ffmpeg_kit_flutter_full : Video created from ffmpeg not showing in the web
16 December 2024, by Đoàn Thanh Thảo
I am implementing a screen-capture widget that creates a video. The video is created successfully in MP4 format and I can view it in my app, and I then posted it to my website. However, on the web I cannot see the video preview; only when I download the video and view it on my phone can I see it. I think some format requirement is missing that prevents the video from playing over HTTPS. Here is my code:


final command = '-y -f image2 -i $imagesPath '
 '${fps != null ? "-r $fps" : ""} '
 '-vf "scale=2000:5000" '
 // '-vf "scale=iw*$scaleMultiplier:ih*$scaleMultiplier" '
 '-c:v mpeg4 -q:v $qValue -pix_fmt yuv420p -movflags +faststart $outputVideoPath';



I tried to use libx264 but got an error.
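
For browser playback, H.264 (libx264) with yuv420p is generally the safe target; mpeg4 (MPEG-4 Part 2) is not decoded by most web players. A minimal sketch of equivalent command-line arguments, where the input pattern and filenames are placeholders rather than the exact Dart interpolations above:

ffmpeg -y -f image2 -framerate 1 -i img_%03d.png \
 -vf "scale=1280:-2" \
 -c:v libx264 -crf 23 -pix_fmt yuv420p -movflags +faststart output.mp4

scale=1280:-2 keeps both dimensions even, which libx264 with yuv420p requires. If libx264 still reports an error inside the app, it may be because the non-GPL ffmpeg-kit builds do not bundle x264; as far as I know, only the GPL variants (such as ffmpeg_kit_flutter_full_gpl) include it.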