Advanced search

Media (1)

Keyword: - Tags -/publishing

Other articles (108)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Customizing by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    To do so, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    At the moment, MediaSPIP is only available in French and (...)

On other sites (19976)

  • How to convert a Stream on the fly with FFMpegCore?

    18 October 2023, by Adrian

    For a school project, I need to stream videos that I get from torrents while they are still downloading on the server.
When the video is a .mp4 file, there's no problem, but I must also be able to stream .mkv files, and for that I need to convert them to .mp4 before sending them to the client. I can't find a way to turn the Stream I get from MonoTorrent into a Stream I can send to my client using FFMpegCore.

    


    Here is the code I wrote to simply download and stream my torrent:

    


    var cEngine = new ClientEngine();

// Create a manager that supports streaming the torrent while it downloads
var manager = await cEngine.AddStreamingAsync(GenerateMagnet(torrent), ) ?? throw new Exception("An error occurred while creating the torrent manager");

await manager.StartAsync();
await manager.WaitForMetadataAsync();

// Assume the largest file in the torrent is the video
var videoFile = manager.Files.OrderByDescending(f => f.Length).FirstOrDefault();
if (videoFile == null)
    return Results.NotFound();

// Create a stream that can be read while the file is still downloading
var stream = await manager.StreamProvider!.CreateStreamAsync(videoFile, true);
return Results.File(stream, contentType: "video/mp4", fileDownloadName: manager.Name, enableRangeProcessing: true);


    


    I saw that the most common way to convert videos is by using ffmpeg. .NET has a package called FFMpegCore that is a wrapper for ffmpeg.

    


    To my previous code, I would add the following right before the return:

    


    if (!videoFile.Path.EndsWith(".mp4"))
{
    // Pipe the torrent stream through ffmpeg and buffer the result in memory
    var outputStream = new MemoryStream();
    FFMpegArguments
        .FromPipeInput(new StreamPipeSource(stream), options =>
        {
            options.ForceFormat("mp4");
        })
        .OutputToPipe(new StreamPipeSink(outputStream))
        .ProcessAsynchronously(); // the returned Task is not awaited here
    return Results.File(outputStream, contentType: "video/mp4", fileDownloadName: manager.Name, enableRangeProcessing: true);
}


    


    I unfortunately can't get a "live" Stream to send to my client.
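    For context, here is a minimal sketch of the direction I think this needs to go, assuming an ASP.NET Core minimal-API endpoint with access to the HttpContext (the endpoint path, the -movflags value, and the codec choices are my assumptions, not something from the docs): the regular mp4 muxer has to seek back to rewrite the moov atom, which is impossible on a pipe, so the output has to be fragmented MP4, and the conversion task has to be awaited while writing straight to the response body:

// A sketch, not known-good code: assumes `stream` is the MonoTorrent stream
// from above. Range processing is disabled because a live pipe is not seekable.
app.MapGet("/watch", async (HttpContext ctx) =>
{
    ctx.Response.ContentType = "video/mp4";

    await FFMpegArguments
        .FromPipeInput(new StreamPipeSource(stream), options =>
        {
            options.ForceFormat("matroska"); // assumption: the source is .mkv
        })
        .OutputToPipe(new StreamPipeSink(ctx.Response.Body), options => options
            .ForceFormat("mp4")
            // fragmented MP4, so ffmpeg never needs to seek back to
            // rewrite the moov atom on the non-seekable response body
            .WithCustomArgument("-movflags frag_keyframe+empty_moov")
            .WithVideoCodec("copy")  // assumption: the mkv already holds H.264
            .WithAudioCodec("aac"))
        .ProcessAsynchronously(); // awaited, so data flows until ffmpeg exits
});

    Compared to the MemoryStream version above, nothing is returned after the fact; the client starts receiving bytes as soon as ffmpeg produces them.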

    


  • How would I play a large list of mp3 and wav files while showing the name of each on the screen with ffmpeg

    4 July 2023, by iiDk

    i've been attempting to make a video that is autogenerated using ffmpeg that plays a list of audios and while they are playing it shows the name of the audio file on the screen. i have no idea how to use ffmpeg and i abused ai for the provided script, but it's stupid and doesn't know how to properly reencode the file fixing the bugs at the end which cause the audio to only be on the left channel for no reason and then eventually cutting out.

    


    import os
import subprocess

def create_combined_video(audio_folder, output_path):
    # Get a list of audio files in the specified folder
    audio_files = []
    for filename in os.listdir(audio_folder):
        if filename.endswith(".mp3") or filename.endswith(".wav"):
            audio_files.append(os.path.join(audio_folder, filename))

    # Sort the audio files alphabetically
    audio_files.sort()

    # Create a folder to store the temporary image frames
    temp_frames_folder = "temp_frames"
    os.makedirs(temp_frames_folder, exist_ok=True)

    # Generate the image frames with the corresponding audio file names
    for index, audio_file in enumerate(audio_files):
        name = os.path.splitext(os.path.basename(audio_file))[0]
        image_path = os.path.join(temp_frames_folder, f"{index+1:06d}.jpg")

        # Use FFmpeg to create the image frame with text overlay
        ffmpeg_cmd = f'ffmpeg -y -f lavfi -i color=c=white:s=720x480:d=1 -vf "drawtext=text=\'{name}\':fontcolor=black:fontsize=36:x=(w-text_w)/2:y=(h-text_h)/2" -vframes 1 "{image_path}"'
        subprocess.run(ffmpeg_cmd, shell=True)

    # Generate a text file containing the image file names for each audio
    image_names_path = "image_names.txt"
    with open(image_names_path, "w") as file:
        for index, audio_file in enumerate(audio_files):
            image_path = os.path.join(temp_frames_folder, f"{index+1:06d}.jpg")
            duration = get_audio_duration(audio_file)
            file.write(f"file '{image_path}'\nduration {duration}\n")

    # Generate a text file containing the audio file names
    audio_names_path = "audio_names.txt" 
    with open(audio_names_path, "w") as file:
        for audio_file in audio_files:
            file.write(f"file '{audio_file}'\n")

    # Re-encode the audio files with a common codec (AAC)
    reencoded_audio_folder = "reencoded_audio"
    os.makedirs(reencoded_audio_folder, exist_ok=True)
    for index, audio_file in enumerate(audio_files):
        reencoded_audio_file = os.path.join(reencoded_audio_folder, f"{index:03d}.m4a")
        ffmpeg_cmd = f'ffmpeg -y -i "{audio_file}" -c:a aac "{reencoded_audio_file}"'
        subprocess.run(ffmpeg_cmd, shell=True)

    # Generate a text file containing the re-encoded audio file names
    reencoded_audio_names_path = "reencoded_audio_names.txt"
    with open(reencoded_audio_names_path, "w") as file:
        for index, audio_file in enumerate(audio_files):
            reencoded_audio_file = os.path.join(reencoded_audio_folder, f"{index:03d}.m4a")
            file.write(f"file '{reencoded_audio_file}'\n")

    # Use FFmpeg to generate the video with the image frames and re-encoded audio
    ffmpeg_cmd = f'ffmpeg -y -f concat -safe 0 -i "{image_names_path}" -f concat -safe 0 -i "{reencoded_audio_names_path}" -c:v libx264 -pix_fmt yuv420p -vf "scale=720:480:force_original_aspect_ratio=increase,crop=720:480" -c:a aac -shortest "{output_path}"'
    subprocess.run(ffmpeg_cmd, shell=True)

    # Clean up temporary files and folders
    os.remove(image_names_path)
    os.remove(audio_names_path)
    os.remove(reencoded_audio_names_path)  # this temp file was never cleaned up
    for image_file in os.listdir(temp_frames_folder):
        os.remove(os.path.join(temp_frames_folder, image_file))
    os.rmdir(temp_frames_folder)
    for audio_file in os.listdir(reencoded_audio_folder):
        os.remove(os.path.join(reencoded_audio_folder, audio_file))
    os.rmdir(reencoded_audio_folder)

def get_audio_duration(audio_file): 
    # Use FFprobe to get the duration of the audio file
    ffprobe_cmd = f'ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "{audio_file}"'
    result = subprocess.run(ffprobe_cmd, shell=True, capture_output=True, text=True)
    duration = float(result.stdout.strip())
    return duration

# Usage example
audio_folder = "C:/Users/Admin/Desktop/Sounds"
output_path = "C:/Users/Admin/Desktop/output.mp4"
create_combined_video(audio_folder, output_path)


    


    i've tried yelling at ai to fix the bug and all it does is break the script instead of doing what i asked it to, but i believe all it has to do is fix reencoding

    


  • DirectX12 Video Encoding: output buffer population

    6 January 2023, by mike

    I'm attempting to implement DX12 video encoding.

    


      

    • To date I've been using the ffmpeg library so am not very clued up on very low level data.
    • 


    • I'm using the simplest possible encoding I can think of, with GOP size of 1 and H264
    • 


    


    I am struggling with the first part of defining the output structure D3D12_VIDEO_ENCODER_ENCODEFRAME_OUTPUT_ARGUMENTS, namely setting up the resource pBuffer in D3D12_VIDEO_ENCODER_COMPRESSED_BITSTREAM:

    


    typedef struct D3D12_VIDEO_ENCODER_COMPRESSED_BITSTREAM {
  ID3D12Resource *pBuffer;
  UINT64         FrameStartOffset;
} D3D12_VIDEO_ENCODER_COMPRESSED_BITSTREAM;


    


    Regarding pBuffer, the docs say:

    


    


    "A pointer to a ID3D12Resource containing the compressed bitstream buffer".

    


    


      

    • So I guess I create a buffer at least the size of the input frame plus room for header data, and make it writeable. Seems like it should be straightforward, but am I missing something? Should it be multi-planar, for example, or some multiple of the input frame size? (See the sketch below.)
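    To make the question concrete, here is a sketch of what I have in mind, assuming a plain single-dimension buffer is enough (the helper name and the sizing choice are mine, not from any sample):

// Sketch: create a plain buffer for the compressed output and point the
// bitstream struct at it. Sizing it to the uncompressed frame plus some
// header slack is my guess at a conservative upper bound.
#include <d3d12.h>
#include <d3d12video.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Resource> CreateBitstreamBuffer(ID3D12Device* device, UINT64 size)
{
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type = D3D12_HEAP_TYPE_DEFAULT; // GPU-writable; copy to a READBACK
                                         // buffer to map it on the CPU
    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = size;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.Format = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ComPtr<ID3D12Resource> buffer;
    device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COMMON, nullptr,
                                    IID_PPV_ARGS(&buffer));
    return buffer;
}

// Then, when filling the output arguments:
// D3D12_VIDEO_ENCODER_COMPRESSED_BITSTREAM bitstream = {};
// bitstream.pBuffer = bitstreamBuffer.Get();
// bitstream.FrameStartOffset = 0; // or past any host-written headers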


    


    Then:

    


    


    "The output bitstream is expected to contain the subregion headers, but not the picture, sequence, video or other headers. The host is responsible for coding those headers and generating the complete bitstream."

    


    


      

    • What do these subregion headers look like? I am showing my lack of encoding knowledge here in general; is there a resource somewhere explaining how to calculate them? (Or have I misread this, and it is saying the output will contain them?)
    • Do I just write them by copying into mapped memory and setting the FrameStartOffset to point after the header data?
    • I'm currently streaming AVPackets from ffmpeg using libdatachannel; how would the content of the output (without my adding extra headers) compare to an AVPacket->data?
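    For anyone with the same questions, my working assumption (not confirmed by the docs) is that the buffer holds the slice NALUs with their own headers, and the host prepends start codes plus hand-coded SPS/PPS to form a complete Annex B access unit, something like:

// Sketch under the assumptions above; sps/pps are raw NAL payloads coded
// by the host per the H.264 spec, sliceData is copied out of pBuffer.
#include <cstdint>
#include <vector>

std::vector<uint8_t> AssembleAccessUnit(const std::vector<uint8_t>& sps,
                                        const std::vector<uint8_t>& pps,
                                        const uint8_t* sliceData,
                                        size_t sliceSize)
{
    std::vector<uint8_t> accessUnit;
    auto appendNal = [&](const uint8_t* data, size_t size) {
        static const uint8_t startCode[4] = {0, 0, 0, 1};
        accessUnit.insert(accessUnit.end(), startCode, startCode + 4);
        accessUnit.insert(accessUnit.end(), data, data + size);
    };
    appendNal(sps.data(), sps.size()); // host-coded sequence parameter set
    appendNal(pps.data(), pps.size()); // host-coded picture parameter set
    // If the driver already emits start codes for slices, append the slice
    // bytes raw instead of going through appendNal.
    appendNal(sliceData, sliceSize);
    return accessUnit;
}

    If that holds, the result would be roughly comparable to an H.264 AVPacket->data from ffmpeg, which typically carries one complete access unit (Annex B or length-prefixed, depending on the muxer).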