
Other articles (104)

  • Customizing by adding your own logo, banner, or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.

  • APPENDIX: Plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several plugins, in addition to those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (10391)

  • FFmpeg video has an excessively long duration

    8 July 2023, by daraem

    When using ytdl-core and ffmpeg-static to download high-quality YouTube videos, the output video reports a duration of thousands of hours, which prevents seeking in a media player. The error only occurs in Windows 10 players; it does not happen in VLC or Discord.

    


        // Assumed requires (not shown in the original snippet); this runs
        // inside an Express route handler where `res` and `link` are in scope.
        const cp = require('child_process')
        const ytdl = require('ytdl-core')
        const ffmpeg = require('ffmpeg-static')

        res.header("Content-Disposition", `attachment; filename=video.mp4`)

        let video = ytdl(link, {
            filter: 'videoonly'
        })
        let audio = ytdl(link, {
            filter: 'audioonly',
            highWaterMark: 1 << 25
        });
        const ffmpegProcess = cp.spawn(ffmpeg, [
            '-i', 'pipe:3',          // video stream (input 0)
            '-i', 'pipe:4',          // audio stream (input 1)
            '-map', '1:0',           // audio from input 1
            '-map', '0:0',           // video from input 0
            '-c:v', 'libx264',       // was specified twice ('-vcodec' and '-c:v'); once is enough
            '-c:a', 'aac',
            '-crf', '27',
            '-preset', 'veryslow',
            '-b:v', '1500k',
            '-b:a', '128k',
            '-movflags', 'frag_keyframe+empty_moov',
            '-f', 'mp4',
            '-loglevel', 'error',
            '-',                     // write the muxed output to stdout
        ], {
            stdio: [
                'pipe', 'pipe', 'pipe', 'pipe', 'pipe',
            ],
        });
    
        video.pipe(ffmpegProcess.stdio[3]);
        audio.pipe(ffmpegProcess.stdio[4]);
        ffmpegProcess.stdio[1].pipe(res);
    
        let ffmpegLogs = ''
    
        ffmpegProcess.stdio[2].on(
            'data',
            (chunk) => {
                ffmpegLogs += chunk.toString()
            }
        )
    
        ffmpegProcess.on(
            'exit',
            (exitCode) => {
                if (exitCode === 1) {
                    console.error(ffmpegLogs)
                }
            }
        )


    


    I've tried changing the codec options, but I'm not sure what I'm doing.

    


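    A frequently suggested tweak for bogus durations in piped, fragmented MP4 output (an assumption here, not verified against this exact setup) is to add the `default_base_moof` flag to `-movflags`: with `frag_keyframe+empty_moov` alone, some Windows demuxers cannot compute a sane duration from the fragments. A minimal sketch of the adjusted argument list:

    ```python
    # Hypothetical adjusted flags for the cp.spawn() call above:
    # 'default_base_moof' makes each fragment carry its own base offset,
    # which fragmented-MP4 players without the full moov atom rely on.
    movflags = "frag_keyframe+empty_moov+default_base_moof"

    ffmpeg_args = [
        "-i", "pipe:3",      # video stream (input 0)
        "-i", "pipe:4",      # audio stream (input 1)
        "-map", "0:v:0",     # explicit video map
        "-map", "1:a:0",     # explicit audio map
        "-c:v", "libx264",
        "-c:a", "aac",
        "-movflags", movflags,
        "-f", "mp4",
        "-",                 # muxed output to stdout
    ]
    ```

    The same flag string drops straight into the JavaScript array in the question; only the `-movflags` value changes.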
  • Failed to add audio to mp4 with moviepy

    24 April, by D G

    I generated an audio file, sound.wav, and a video, temp.mp4, that uses the audio; both have the same duration.

    


    I got the following warning when generating temp.mp4. The animation is out of sync: the video freezes before the audio finishes.

    


    


    .venv\Lib\site-packages\moviepy\video\io\ffmpeg_reader.py:178: UserWarning: In file temp.mp4, 1080000 bytes wanted but 0 bytes read at frame index 299 (out of a total 300 frames), at time 4.98/5.00 sec. Using the last valid frame instead.
    warnings.warn(

    


    


    Complete code:

    


    # to generate sound.wav
import numpy as np
import soundfile as sf
from tqdm import tqdm
from os import startfile

# Parameters
filename = "sound.wav"
duration = 5  # seconds
num_voices = 1000
sample_rate = 44100  # Hz
chunk_size = sample_rate  # write in 1-second chunks


t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)

# Create many detuned sine waves (Deep Note style)
start_freqs = np.random.uniform(100, 400, num_voices)
end_freqs = np.linspace(400, 800, num_voices)  # target harmony

# Each voice sweeps from start to end frequency
signal = np.zeros_like(t)
for i in range(num_voices):
    freqs = np.linspace(start_freqs[i], end_freqs[i], t.size)
    voice = np.sin(2 * np.pi * freqs * t)
    voice *= np.sin(np.pi * i / num_voices)  # slight variance
    signal += voice

# Volume envelope
envelope = np.linspace(0.01, 1.0, t.size)
signal *= envelope

# Normalize
signal /= np.max(np.abs(signal))

# Save with progress bar using soundfile
with sf.SoundFile(
    filename, "w", samplerate=sample_rate, channels=1, subtype="PCM_16"
) as f:
    for i in tqdm(range(0, len(signal), chunk_size), desc=f"Saving {filename}"):
        f.write(signal[i : i + chunk_size])


startfile(filename)


    


    # to generate temp.mp4
from numpy import pi, sin, cos
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from os import startfile
from tqdm import tqdm
from moviepy import VideoFileClip, AudioFileClip, CompositeAudioClip

# Output settings
filename = "temp.mp4"
duration = 5  # seconds of animation
maxdim = 4  # canvas size in plot units (scaled)
fps = 60

# Real-world parameters
r = 40  # km (radius)
endtime = 2  # hours (duration of real motion)
rph = 0.5  # rotations per hour
omega = 2 * pi * rph  # rad/hour
speed = omega * r  # km/hour

# Animation setup
frames = duration * fps
scale = maxdim / r  # scale from km to plot units
dt = endtime / frames  # time per frame in hours

# Prepare figure and axes
fig, ax = plt.subplots(figsize=(6, 6))
ax.set_xlim(-maxdim - 1, maxdim + 1)
ax.set_ylim(-maxdim - 1, maxdim + 1)
ax.set_aspect("equal")
ax.grid()



# Plot circle path
circle = plt.Circle((0, 0), r * scale, color="lightgray", fill=False, linestyle="--")
ax.add_patch(circle)

# Moving point
(point,) = ax.plot([], [], "ro")

# Info text at center of the circle
info_text = ax.text(
    0, 0, "", fontsize=10,
    ha="center", va="center",
    bbox=dict(boxstyle="round,pad=0.4", facecolor="white", alpha=0.8)
)


def init():
    point.set_data([], [])
    info_text.set_text("")
    return point, info_text


def update(frame):
    t = frame * dt  # time in hours
    theta = omega * t  # angle in radians

    x = r * cos(theta) * scale
    y = r * sin(theta) * scale

    point.set_data([x], [y])
    info_text.set_text(
        f"Time: {t:.2f} hr\nRadius: {r:.1f} km\nSpeed: {speed:.2f} km/h"
    )
    return point, info_text



# Create animation
anim = FuncAnimation(
    fig, update, frames=frames, init_func=init, blit=True, interval=1000 / fps
)


with tqdm(total=frames, desc="Saving", unit="frame") as pbar:
    anim.save(filename, fps=fps, progress_callback=lambda i, n: pbar.update(1))

# Add sound using MoviePy
video = VideoFileClip(filename)
video.audio = CompositeAudioClip([AudioFileClip("sound.wav")])
video.write_videofile(filename)

startfile(filename)


    


    Could you figure out what the culprit is and how to fix it?

    



    


    Edit

    


    Based on the given comment, I did the following, but the problem still exists.

    


    # Add sound using MoviePy
video = VideoFileClip(filename)
audio = AudioFileClip("sound.wav")
audio.duration = video.duration
video.audio = CompositeAudioClip([audio])
video.write_videofile(filename)


    


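    One possible culprit, inferred from reading the code rather than confirmed: `write_videofile(filename)` targets the very temp.mp4 that `VideoFileClip` is still reading lazily, so the final frames are fetched from a file that is being overwritten, which matches the "0 bytes read at frame index 299" warning. A minimal sketch of a fix using the MoviePy 2.x `with_audio`/`with_duration` API and a distinct output name (`safe_output_path` and `mux_audio` are hypothetical helper names):

    ```python
    from pathlib import Path

    def safe_output_path(input_path: str) -> str:
        # Derive a distinct filename so the input is never overwritten
        # while MoviePy is still reading frames from it.
        p = Path(input_path)
        return str(p.with_name(p.stem + "_with_audio" + p.suffix))

    def mux_audio(video_path: str, audio_path: str) -> str:
        # MoviePy 2.x: with_duration/with_audio return new clips instead
        # of mutating attributes in place.
        from moviepy import VideoFileClip, AudioFileClip
        out = safe_output_path(video_path)
        video = VideoFileClip(video_path)
        audio = AudioFileClip(audio_path).with_duration(video.duration)
        video.with_audio(audio).write_videofile(out)
        return out
    ```

    With this sketch, `mux_audio("temp.mp4", "sound.wav")` would write temp_with_audio.mp4 while leaving the input untouched.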
  • Repeat RTMP with FFMPEG when stream already started

    1 April 2017, by John Doee

    I have an NGINX RTMP server set up; unfortunately, playback of the RTMP source is only possible within a few seconds after a given stream has been published.

    Connections that were already opened, e.g. transcoding through FFmpeg, keep working fine for hours, but they have to be started within a few seconds of the video signal being published.

    So while the stream is being transcoded, and hence definitely available, FFprobe can’t find the specific stream, ending with the following output (debug mode):

    [rtmp @ 0x7fadbdc0b5e0] Proto = rtmp, path = /live/4_9_lLV7GhFmTG0w, app = live, fname = 4_9_lLV7GhFmTG0w

    [rtmp @ 0x7fadbdc0b5e0] Server bandwidth = 5000000

    [rtmp @ 0x7fadbdc0b5e0] Client bandwidth = 5000000

    [rtmp @ 0x7fadbdc0b5e0] New incoming chunk size = 4096

    [rtmp @ 0x7fadbdc0b5e0] Creating stream...

    [rtmp @ 0x7fadbdc0b5e0] Sending play command for ’4_9_lLV7GhFmTG0w’

    [rtmp @ 0x7fadbdc0b5e0] Deleting stream...

    rtmp://***:80/live/4_9_lLV7GhFmTG0w: Input/output error

    (Executing exactly the same command within two or three seconds of the initial publishing of the video signal succeeds, and transcoding processes last for the whole duration of the stream.) It seems that some header data used to identify the current stream goes missing after a few seconds.

    Any suggestions on this? Thank you very much in advance for your help.
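
    For what it may be worth, FFmpeg’s native RTMP client defaults to requesting a recorded stream; joining a live stream that started earlier often requires passing `-rtmp_live live` before the input. A hedged sketch of how the probe could be invoked (`build_probe_cmd` and `probe` are hypothetical helper names; adjust the URL to your setup):

    ```python
    import subprocess

    def build_probe_cmd(url: str) -> list[str]:
        # -rtmp_live live must appear before -i: it is an input option of
        # FFmpeg's native RTMP protocol telling the server we want the
        # live stream rather than a recorded one.
        return ["ffprobe", "-v", "error", "-rtmp_live", "live", "-i", url]

    def probe(url: str, timeout: float = 10.0) -> subprocess.CompletedProcess:
        # Run ffprobe against the live stream, capturing its diagnostics.
        return subprocess.run(build_probe_cmd(url),
                              capture_output=True, text=True, timeout=timeout)
    ```

    The same option works for ffmpeg itself when starting a transcode late, e.g. `ffmpeg -rtmp_live live -i rtmp://... `.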