
Other articles (61)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.2. What is new?
    Software dependencies: the latest versions of FFMpeg are used (>= v1.2.1); the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Personalizing by adding your logo, banner or background image

    5 September 2013

    Some themes take three personalization elements into account: adding a logo; adding a banner; adding a background image.

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

On other sites (8863)

  • Mix additional audio file with video(+audio) in ffmpeg

    21 May 2019, by Serg

    I’m trying to mix an additional audio file with a video that already contains audio. The problem is that I already have a complex ffmpeg command and don’t know how to combine the two.

    This is my existing ffmpeg command. It uses some offsets, replaces the embedded audio (the audio inside the video) with the additional audio file, and also overlays a few gauges and a watermark on the video.

    ffmpeg -y \
    -ss 00:00:01.213 -i videoFile.mp4 \
    -ss 00:00:03.435 -i audioFile.wav \
    -i watermark.png \
    -framerate 6 -i gauge1_path/img-%04d.png \
    -framerate 1 -i gauge2_path/img-%04d.png \
    -framerate 2 -i gauge3_path/img-%04d.png \
    -framerate 2 -i gauge4_path/img-%04d.png \
    -framerate 2 -i gauge5_path/img-%04d.png \
    -framerate 2 -i gauge6_path/img-%04d.png \
    -filter_complex "[0][2]overlay=(21):(H-h-21)[ovr0]; \
    [ovr0][3]overlay=(W-w-21):(H-h-21)[ovr1]; \
    [ovr1][4]overlay=(W-w-21):(H-h-333)[ovr2]; \
    [ovr2][5]overlay=(W-w-21):(H-h-418)[ovr3]; \
    [ovr3][6]overlay=(W-w-21):(H-h-503)[ovr4]; \
    [ovr4][7]overlay=(W-w-21):(H-h-588)[ovr5]; \
    [ovr5][8]overlay=(W-w-21):(H-h-673)" \
    -map 0:v -map 1:a -c:v libx264 -preset ultrafast -crf 23 -t 00:05:10.000 output.mp4

    Now I would like to use ffmpeg’s amix filter to mix both audio tracks instead of replacing one with the other, ideally with the ability to set their volumes. But the official amix documentation says nothing about volume.

    Separately, both seem to work OK.

    ffmpeg -y -i video.mp4 -i audio.mp3 -filter_complex [0][1]amix=inputs=2[a] -map 0:v -map [a] -c:v copy output.mp4

    and

    ffmpeg -y -i video.mp4 -i audio.mp3 -i watermark.png -filter_complex [0][2]overlay=(21):(H-h-21)[ovr0] -map [ovr0]:v -map 1:a -c:v libx264 -preset ultrafast -crf 23 output.mp4

    but together

    ffmpeg -y -i video.mp4 -i audio.mp3 -i watermark.png -filter_complex [0][1]amix=inputs=2[a];[a][2]overlay=(21):(H-h-21)[ovr0] -map [ovr0]:v -map [a] -c:v libx264 -preset ultrafast -crf 23 output.mp4

    I’m getting an error:

    ffmpeg version N-93886-gfbdb3aa179 Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 8.3.1 (GCC) 20190414
     configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
     libavutil      56. 28.100 / 56. 28.100
     libavcodec     58. 52.101 / 58. 52.101
     libavformat    58. 27.103 / 58. 27.103
     libavdevice    58.  7.100 / 58.  7.100
     libavfilter     7. 53.101 /  7. 53.101
     libswscale      5.  4.101 /  5.  4.101
     libswresample   3.  4.100 /  3.  4.100
     libpostproc    55.  4.100 / 55.  4.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       creation_time   : 1970-01-01T00:00:00.000000Z
       encoder         : Lavf53.24.2
     Duration: 00:00:29.57, start: 0.000000, bitrate: 1421 kb/s
       Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1032 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
       Metadata:
         creation_time   : 1970-01-01T00:00:00.000000Z
         handler_name    : VideoHandler
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 383 kb/s (default)
       Metadata:
         creation_time   : 1970-01-01T00:00:00.000000Z
         handler_name    : SoundHandler
    [mp3 @ 0000015e2f934ec0] Estimating duration from bitrate, this may be inaccurate
    Input #1, mp3, from 'audio.mp3':
     Duration: 00:00:45.33, start: 0.000000, bitrate: 128 kb/s
       Stream #1:0: Audio: mp3, 44100 Hz, stereo, fltp, 128 kb/s
    Input #2, png_pipe, from 'watermark.png':
     Duration: N/A, bitrate: N/A
       Stream #2:0: Video: png, rgb24(pc), 100x56 [SAR 3779:3779 DAR 25:14], 25 tbr, 25 tbn, 25 tbc
    [Parsed_amix_0 @ 0000015e2ff2e940] Media type mismatch between the 'Parsed_amix_0' filter output pad 0 (audio) and the 'Parsed_overlay_1' filter input pad 0 (video)
    [AVFilterGraph @ 0000015e2f91c600] Cannot create the link amix:0 -> overlay:0
    Error initializing complex filters.
    Invalid argument

    So my question: is it possible to combine amix and overlay, and if so, how and in which order should they be used? Or should I look at something different, given that amix cannot set volume levels?

    Thanks in advance!
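    The error in the log is structural: amix emits an audio stream, while overlay expects video on both of its inputs, so the `[a]` label can never feed `overlay`. The two chains have to stay separate — video labels into overlay, audio labels into amix — and per-input volume can be applied with the volume filter before mixing. Below is a minimal sketch, in Python only to assemble the command line, based on the simplified three-input example from the question; the volume values 1.0 and 0.5 are placeholders, not taken from the original post.

    ```python
    # Sketch: build an ffmpeg command that overlays the watermark on the
    # video while mixing the two audio tracks at chosen volumes.
    filter_complex = ";".join([
        # Video chain: watermark (input 2) over input 0's video.
        "[0:v][2:v]overlay=21:(H-h-21)[ovr0]",
        # Audio chain: scale each input's volume, then mix both.
        "[0:a]volume=1.0[a0]",
        "[1:a]volume=0.5[a1]",
        "[a0][a1]amix=inputs=2:duration=first[a]",
    ])

    cmd = [
        "ffmpeg", "-y",
        "-i", "video.mp4",
        "-i", "audio.mp3",
        "-i", "watermark.png",
        "-filter_complex", filter_complex,
        "-map", "[ovr0]",   # filtergraph labels are mapped without :v / :a
        "-map", "[a]",
        "-c:v", "libx264", "-preset", "ultrafast", "-crf", "23",
        "output.mp4",
    ]
    print(" ".join(cmd))
    ```

    Note that `-map [ovr0]:v` in the failing command is also invalid: a filtergraph output label is mapped as `-map "[ovr0]"`, with no stream specifier.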

  • ffmpeg trouble with NetMaui and Android [closed]

    15 June 2024, by Billy Vanegas

    I find myself in the following predicament. With Visual Studio 2022, ffmpeg performs flawlessly on the Windows platform, but I’m encountering difficulties when attempting to replicate this functionality on Android. I have explored NuGet packages like FFMpegCore, which turn out to be ffmpeg wrappers that do not ship ffmpeg itself, and I’m still struggling to find a clear path forward. I’ve even tried integrating ffmpeg-kit for Android as per the instructions, only to face repeated failures and a sense of confusion. I must admit my puzzlement: why isn’t there a straightforward way to add ffmpeg to a .NET MAUI project that works consistently across both iOS and Android?

    I want to convert MP3 files to WAV format using FFmpeg on the Android platform within a .NET MAUI project.

    I am using the FFMpegCore library and have downloaded the FFmpeg binaries from the official FFmpeg website. However, I ran into issues when trying to run the binary on the emulator: when I try to execute it, I get permission denied in the working folder /data/user/0/com.companyname.projectname/files/ffmpeg, on this part of the code:

             await FFMpegArguments
               .FromFileInput(mp3Path)
               .OutputToFile(wavPath, true, options => options
                   .WithAudioCodec("pcm_s16le")
                   .WithAudioSamplingRate(44100)
                   .WithAudioBitrate(320000)
                   )
               .ProcessAsynchronously();

    This is the AndroidManifest.xml:

    <?xml version="1.0" encoding="utf-8"?>
    <manifest>
        <application></application>

    </manifest>


     private async Task ConvertMp3ToWav(string mp3Path, string wavPath)
     {
         try
         {
             var directory = Path.GetDirectoryName(wavPath);
             if (!Directory.Exists(directory))
                 Directory.CreateDirectory(directory!);
             if (!File.Exists(wavPath))
                 Console.WriteLine($"File not found {wavPath}, creating empty file.");
                 using var fs = new FileStream(wavPath, FileMode.CreateNew);
             if (!File.Exists(mp3Path))
                 Console.WriteLine($"File not found {mp3Path}");

             string? ffmpegBinaryPath = await ExtractFFmpegBinaries(Platform.AppContext);
             FFMpegCore.GlobalFFOptions.Configure(new FFOptions { BinaryFolder = Path.GetDirectoryName(ffmpegBinaryPath!)! });

             await FFMpegArguments
                   .FromFileInput(mp3Path)
                   .OutputToFile(wavPath, true, options => options
                       .WithAudioCodec("pcm_s16le")
                       .WithAudioSamplingRate(44100)
                       .WithAudioBitrate(320000)
                       )
                   .ProcessAsynchronously();
         }
         catch (Exception ex)
         {
             Console.WriteLine($"An error occurred during the conversion process: {ex.Message}");
             throw;
         }
     }


        private async Task<string> ExtractFFmpegBinaries(Context context)
        {
            var architectureFolder = "x86"; // "armeabi-v7a";
            var ffmpegBinaryName = "ffmpeg";
            var ffmpegBinaryPath = Path.Combine(context.FilesDir!.AbsolutePath, ffmpegBinaryName);
            var tempFFMpegFileName = Path.Combine(FileSystem.AppDataDirectory, ffmpegBinaryName);

            if (!File.Exists(ffmpegBinaryPath))
            {
                try
                {
                    var assetPath = $"Libs/{architectureFolder}/{ffmpegBinaryName}";
                    using var assetStream = context.Assets!.Open(assetPath);

                    await using var tempFFMpegFile = File.OpenWrite(tempFFMpegFileName);
                    await assetStream.CopyToAsync(tempFFMpegFile);

                    //new MainActivity().RequestStoragePermission();
                    Java.Lang.Runtime.GetRuntime()!.Exec($"chmod 755 {tempFFMpegFileName}");
                }
                catch (Exception ex)
                {
                    Console.WriteLine($"An error occurred while extracting FFmpeg binaries: {ex.Message}");
                    throw;
                }
            }
            else
            {
                Console.WriteLine($"FFmpeg binaries already extracted to: {ffmpegBinaryPath}");
            }

            return tempFFMpegFileName!;
        }


  • Problems with Python's azure.cognitiveservices.speech when installing together with FFmpeg in a Linux web app

    15 May 2024, by Kakobo kakobo

    I need some help. I'm building a web app that takes any audio format, converts it into a .wav file, and then passes it to azure.cognitiveservices.speech for transcription. I'm building the web app from a container Dockerfile, as I need to install ffmpeg to be able to convert non-".wav" audio files to ".wav" (Azure speech services only process wav files). For some odd reason, the speechsdk class of azure.cognitiveservices.speech fails to work when I install ffmpeg in the web app. The class works perfectly fine when I install it without ffmpeg, or when I build and run the container on my machine.
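    The convert-then-transcribe step described above can be sketched with a plain subprocess call to ffmpeg. This is a minimal sketch, not the poster's code: the helper name `to_wav` is hypothetical, ffmpeg is assumed to be on PATH, and the 16 kHz mono 16-bit PCM parameters are a common choice for speech APIs rather than values taken from the question.

    ```python
    import os
    import shutil
    import subprocess

    def to_wav(src_path: str, wav_path: str) -> list[str]:
        """Build the ffmpeg conversion command; run it only when possible."""
        cmd = [
            "ffmpeg", "-y",
            "-i", src_path,        # any input format ffmpeg understands
            "-ac", "1",            # downmix to mono
            "-ar", "16000",        # 16 kHz sample rate
            "-c:a", "pcm_s16le",   # 16-bit little-endian PCM (standard WAV)
            wav_path,
        ]
        # Only execute when ffmpeg is installed and the source file exists.
        if shutil.which("ffmpeg") and os.path.exists(src_path):
            subprocess.run(cmd, check=True)
        return cmd

    cmd = to_wav("input.mp3", "output.wav")
    print(" ".join(cmd))
    ```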


    I have placed debug print statements in the code. I can see the class initializing, but for some reason it does not buffer in the same way as when running locally on my machine. The routine simply stops without any reason.


    Has anybody experienced a similar issue with azure.cognitiveservices.speech conflicting with ffmpeg?


    Here's my Dockerfile :


    # Use an official Python runtime as a parent image
    FROM python:3.11-slim

    # Version Run
    RUN echo "Version Run 1..."

    # Install ffmpeg
    RUN apt-get update && apt-get install -y ffmpeg && \
        # Ensure ffmpeg is executable
        chmod a+rx /usr/bin/ffmpeg && \
        # Clean up the apt cache; removing /var/lib/apt/lists saves space
        apt-get clean && rm -rf /var/lib/apt/lists/*

    # Set the working directory in the container
    WORKDIR /app

    # Copy the current directory contents into the container at /app
    COPY . /app

    # Install any needed packages specified in requirements.txt
    RUN pip install --no-cache-dir -r requirements.txt

    # Make port 8000 available to the world outside this container
    EXPOSE 8000

    # Define environment variable
    ENV NAME World

    # Run main.py when the container launches
    CMD ["streamlit", "run", "main.py", "--server.port", "8000", "--server.address", "0.0.0.0"]

    and here's my python code:


    def transcribe_audio_continuous_old(temp_dir, audio_file, language):
        speech_key = azure_speech_key
        service_region = azure_speech_region

        time.sleep(5)
        print(f"DEBUG TIME BEFORE speechconfig")

        ran = generate_random_string(length=5)
        temp_file = f"transcript_key_{ran}.txt"
        output_text_file = os.path.join(temp_dir, temp_file)
        speech_recognition_language = set_language_to_speech_code(language)

        speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
        speech_config.speech_recognition_language = speech_recognition_language
        audio_input = speechsdk.AudioConfig(filename=os.path.join(temp_dir, audio_file))

        speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_input, language=speech_recognition_language)
        done = False
        transcript_contents = ""

        time.sleep(5)
        print(f"DEBUG TIME AFTER speechconfig")
        print(f"DEBUG FIle about to be passed {audio_file}")

        try:
            with open(output_text_file, "w", encoding=encoding) as file:
                def recognized_callback(evt):
                    print("Start continuous recognition callback.")
                    print(f"Recognized: {evt.result.text}")
                    file.write(evt.result.text + "\n")
                    nonlocal transcript_contents
                    transcript_contents += evt.result.text + "\n"

                def stop_cb(evt):
                    print("Stopping continuous recognition callback.")
                    print(f"Event type: {evt}")
                    speech_recognizer.stop_continuous_recognition()
                    nonlocal done
                    done = True

                def canceled_cb(evt):
                    print(f"Recognition canceled: {evt.reason}")
                    if evt.reason == speechsdk.CancellationReason.Error:
                        print(f"Cancellation error: {evt.error_details}")
                    nonlocal done
                    done = True

                speech_recognizer.recognized.connect(recognized_callback)
                speech_recognizer.session_stopped.connect(stop_cb)
                speech_recognizer.canceled.connect(canceled_cb)

                speech_recognizer.start_continuous_recognition()
                while not done:
                    time.sleep(1)
                    print("DEBUG LOOPING TRANSCRIPT")

        except Exception as e:
            print(f"An error occurred: {e}")

        print("DEBUG DONE TRANSCRIPT")

        return temp_file, transcript_contents


    The transcript callback works fine locally, or when installed without ffmpeg in the Linux web app. I'm not sure why it conflicts with ffmpeg when installed via the container Dockerfile. The code section that fails can be found at the note "#NOTE DEBUG".
