
Media (91)

Other articles (68)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as OGV and WebM (supported by HTML5) and MP4 (supported by HTML5 and Flash).
    Audio files are encoded as OGG (supported by HTML5) and MP3 (supported by HTML5 and Flash).
    Where possible, text is analyzed to retrieve the data needed for search-engine indexing, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
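
    For readers wondering what such a conversion looks like in practice, the following sketch runs plain ffmpeg to produce the three target formats mentioned above. It is an illustration only, not MediaSPIP's actual pipeline; the input name source.mov, the output names and the quality settings are assumptions:

# Illustration only: transcode one uploaded file into the three web formats
# described above (OGV and WebM for HTML5, MP4 for HTML5/Flash players).
# "source.mov" and the bitrate/quality settings are placeholders.
import subprocess

SRC = "source.mov"  # hypothetical uploaded file

jobs = [
    ["ffmpeg", "-y", "-i", SRC, "-c:v", "libvpx", "-b:v", "1M", "-c:a", "libvorbis", "out.webm"],
    ["ffmpeg", "-y", "-i", SRC, "-c:v", "libtheora", "-q:v", "6", "-c:a", "libvorbis", "out.ogv"],
    ["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-crf", "23", "-c:a", "aac", "out.mp4"],
]

for cmd in jobs:
    subprocess.run(cmd, check=True)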

  • Use it, talk about it, critique it

    10 April 2011

    The first thing to do is to talk about it, either directly with the people involved in its development or with those around you, to convince new people to use it.
    The larger the community, the faster the project will evolve ...
    A discussion mailing list is available for any exchange between users.

On other sites (8033)

  • Failed to add audio to mp4 with moviepy

    24 April, by D G

    I generated an audio file sound.wav and a video temp.mp4 that uses the audio. Both have the same duration.

    


    I got the following warning when generating temp.mp4. The animation is out of sync: the video freezes before the audio finishes.

    


    


    .venv\Lib\site-packages\moviepy\video\io\ffmpeg_reader.py:178: UserWarning: In file temp.mp4, 1080000 bytes wanted but 0 bytes read at frame index 299 (out of a total 300 frames), at time 4.98/5.00 sec. Using the last valid frame instead.
      warnings.warn(

    


    


    Complete code:

    


    # to generate sound.wav
import numpy as np
import soundfile as sf
from tqdm import tqdm
from os import startfile

# Parameters
filename = "sound.wav"
duration = 5  # seconds
num_voices = 1000
sample_rate = 44100  # Hz
chunk_size = sample_rate  # write in 1-second chunks


t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)

# Create many detuned sine waves (Deep Note style)
start_freqs = np.random.uniform(100, 400, num_voices)
end_freqs = np.linspace(400, 800, num_voices)  # target harmony

# Each voice sweeps from start to end frequency
signal = np.zeros_like(t)
for i in range(num_voices):
    freqs = np.linspace(start_freqs[i], end_freqs[i], t.size)
    voice = np.sin(2 * np.pi * freqs * t)
    voice *= np.sin(np.pi * i / num_voices)  # slight variance
    signal += voice

# Volume envelope
envelope = np.linspace(0.01, 1.0, t.size)
signal *= envelope

# Normalize
signal /= np.max(np.abs(signal))

# Save with progress bar using soundfile
with sf.SoundFile(
    filename, "w", samplerate=sample_rate, channels=1, subtype="PCM_16"
) as f:
    for i in tqdm(range(0, len(signal), chunk_size), desc=f"Saving {filename}"):
        f.write(signal[i : i + chunk_size])


startfile(filename)


    


    # to generate temp.mp4
from numpy import pi, sin, cos
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from os import startfile
from tqdm import tqdm
from moviepy import VideoFileClip, AudioFileClip, CompositeAudioClip

# Output settings
filename = "temp.mp4"
duration = 5  # seconds of animation
maxdim = 4  # canvas size in plot units (scaled)
fps = 60

# Real-world parameters
r = 40  # km (radius)
endtime = 2  # hours (duration of real motion)
rph = 0.5  # rotations per hour
omega = 2 * pi * rph  # rad/hour
speed = omega * r  # km/hour

# Animation setup
frames = duration * fps
scale = maxdim / r  # scale from km to plot units
dt = endtime / frames  # time per frame in hours

# Prepare figure and axes
fig, ax = plt.subplots(figsize=(6, 6))
ax.set_xlim(-maxdim - 1, maxdim + 1)
ax.set_ylim(-maxdim - 1, maxdim + 1)
ax.set_aspect("equal")
ax.grid()



# Plot circle path
circle = plt.Circle((0, 0), r * scale, color="lightgray", fill=False, linestyle="--")
ax.add_patch(circle)

# Moving point
(point,) = ax.plot([], [], "ro")

# Info text at center of the circle
info_text = ax.text(
    0, 0, "", fontsize=10,
    ha="center", va="center",
    bbox=dict(boxstyle="round,pad=0.4", facecolor="white", alpha=0.8)
)


def init():
    point.set_data([], [])
    info_text.set_text("")
    return point, info_text


def update(frame):
    t = frame * dt  # time in hours
    theta = omega * t  # angle in radians

    x = r * cos(theta) * scale
    y = r * sin(theta) * scale

    point.set_data([x], [y])
    info_text.set_text(
        f"Time: {t:.2f} hr\nRadius: {r:.1f} km\nSpeed: {speed:.2f} km/h"
    )
    return point, info_text



# Create animation
anim = FuncAnimation(
    fig, update, frames=frames, init_func=init, blit=True, interval=1000 / fps
)


with tqdm(total=frames, desc="Saving", unit="frame") as pbar:
    anim.save(filename, fps=fps, progress_callback=lambda i, n: pbar.update(1))

# Add sound using MoviePy
video = VideoFileClip(filename)
video.audio = CompositeAudioClip([AudioFileClip("sound.wav")])
video.write_videofile(filename)

startfile(filename)


    


    Could you figure out what the culprit is and how to fix it?

    



    


    Edit

    


    Based on the given comment, I did the following, but the problem still exists.

    


    # Add sound using MoviePy
video = VideoFileClip(filename)
audio = AudioFileClip("sound.wav")
audio.duration = video.duration
video.audio = CompositeAudioClip([audio])
video.write_videofile(filename)
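
    One possible culprit (my reading of the code above, not something confirmed in the question): VideoFileClip(filename) opens temp.mp4 lazily, and write_videofile(filename) then overwrites that same file while frames are still being read from it, which can truncate the tail of the video and produce exactly this kind of ffmpeg_reader warning. A minimal sketch that writes to a separate output file and trims the audio to the video length; the name final.mp4 and the codec choices are arbitrary:

# Sketch only: read temp.mp4, attach sound.wav, and write to a NEW file so
# the input is never overwritten while MoviePy is still reading from it.
from moviepy import VideoFileClip, AudioFileClip

video = VideoFileClip("temp.mp4")
audio = AudioFileClip("sound.wav").with_duration(video.duration)  # keep the audio no longer than the video

video = video.with_audio(audio)
video.write_videofile("final.mp4", codec="libx264", audio_codec="aac")

video.close()
audio.close()

    If the result must end up as temp.mp4, renaming final.mp4 afterwards avoids reading and writing the same path at once.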


    


  • FFmpeg video has an excessively long duration

    8 July 2023, by daraem

    When using ytdl-core and ffmpeg-static to download high-quality YouTube videos, the output video is reported as being thousands of hours long, which prevents seeking in a media player. The error only occurs in the Windows 10 players; it does not happen in VLC or Discord.

    


            res.header("Content-Disposition", `attachment;  filename=video.mp4`)

        let video = ytdl(link, {
            filter: 'videoonly'
        })
        let audio = ytdl(link, {
            filter: 'audioonly',
            highWaterMark: 1 << 25
        });
        const ffmpegProcess = cp.spawn(ffmpeg, [
            '-i', `pipe:3`,
            '-i', `pipe:4`,
            '-map', '1:0',
            '-map', '0:0',
            '-vcodec', 'libx264',
            '-c:v', 'libx264',
            '-c:a', 'aac',
            '-crf', '27',
            '-preset', 'veryslow',
            '-b:v', '1500k',
            '-b:a', '128k',
            '-movflags', 'frag_keyframe+empty_moov',
            '-f', 'mp4',
            '-loglevel', 'error',
            '-',
        ], {
            stdio: [
                'pipe', 'pipe', 'pipe', 'pipe', 'pipe',
            ],
        });
    
        video.pipe(ffmpegProcess.stdio[3]);
        audio.pipe(ffmpegProcess.stdio[4]);
        ffmpegProcess.stdio[1].pipe(res);
    
        let ffmpegLogs = ''
    
        ffmpegProcess.stdio[2].on(
            'data',
            (chunk) => {
                ffmpegLogs += chunk.toString()
            }
        )
    
        ffmpegProcess.on(
            'exit',
            (exitCode) => {
                if (exitCode === 1) {
                    console.error(ffmpegLogs)
                }
            }
        )


    


    I've tried changing the codec options, but I'm not sure what I'm doing.
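
    The huge duration reported by the Windows players is consistent with the -movflags frag_keyframe+empty_moov flags: a fragmented MP4 streamed through a pipe carries an empty moov atom with no overall duration, which VLC can work out from the fragments but some players guess badly. As a diagnostic (a sketch only; download.mp4 and fixed.mp4 are placeholder names, and this assumes the response has been saved to disk first), remuxing once with a complete, front-loaded moov should restore a sane duration:

# Sketch: remux a saved fragmented MP4 into a regular MP4 with a complete
# moov atom; -c copy avoids re-encoding, +faststart moves the index to the
# front of the file. If the remuxed file reports the right duration, the
# frag_keyframe+empty_moov muxing is the culprit.
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "download.mp4",
     "-c", "copy", "-movflags", "+faststart",
     "fixed.mp4"],
    check=True,
)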

    


  • Collect AVFrames into buffer

    24 November 2020, by mgukov

    I collect AVFrames into an array and then free them, but this causes a memory leak.

    



    extern "C" {
#include <libavutil/frame.h>
#include <libavutil/imgutils.h>
}

#include <vector>
#include <iostream>

AVFrame * createFrame() {
    int width = 1280;
    int height = 720;
    AVPixelFormat format = AV_PIX_FMT_YUV420P;
    int buffer_size = av_image_get_buffer_size(format, width, height, 1);
    uint8_t * buffer = (uint8_t *)av_malloc(buffer_size * sizeof(uint8_t));
    memset(buffer, 1, buffer_size);

    uint8_t *src_buf[4];
    int      src_linesize[4];
    av_image_fill_arrays(src_buf, src_linesize, buffer, format, width, height, 1);

    AVFrame * frame = av_frame_alloc();
    frame->width = width;
    frame->height = height;
    frame->format = format;
    av_frame_get_buffer(frame, 0);
    av_image_copy(frame->data, frame->linesize,
                  const_cast<const uint8_t **>(src_buf), const_cast<const int *>(src_linesize),
                  format, width, height);
    av_free(buffer);
    return frame;
}

int main(int argc, char *argv[]) {
    uint32_t count = 1024;

    // fill array with frames
    std::vector<AVFrame *> list;
    for (uint64_t i = 0; i < count; ++i) {
        list.push_back(createFrame());
    }
    // allocated 1385 mb in heap

    // clear all allocated data
    for (auto i = list.begin(); i < list.end(); ++i) {
        if (*i != NULL) {
            av_frame_free(&(*i));
        }
    }
    list.clear();

    // memory-leak of > 360 Mb
}


    But if I just create a frame and immediately free it, without saving it into the vector, there is no memory leak, even though the same number of frames was created.


    What am I doing wrong?


    UPDATE:


    I was wrong. There is no memory leak here (checked with valgrind); the freed memory just does not return to the operating system immediately, which is what confused me.

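
    For completeness: with the glibc allocator, blocks released by av_frame_free go back to malloc's internal arenas rather than straight to the kernel, so the resident size of the process stays high until the allocator trims its heap. On glibc, malloc_trim(0), declared in malloc.h, asks for that trim explicitly and could be called right after the loop that frees the frames. The snippet below, an assumption-laden illustration for a glibc-based Linux system, just issues the same call from Python via ctypes:

# Assumption: glibc on Linux. malloc_trim(0) asks the allocator to hand
# unused heap pages back to the kernel; it returns 1 if anything was released.
import ctypes

libc = ctypes.CDLL("libc.so.6")
released = libc.malloc_trim(0)
print("memory returned to the OS:", bool(released))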