Tag: MediaSPIP 0.2

Other articles (61)

  • Support for all types of media

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

  • (De)Activation of features (plugins)

    18 February 2011, by

    To manage the addition and removal of extra features (plugins), MediaSPIP relies on SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To get there, simply go to the configuration area and open the "Gestion des plugins" (plugin management) page.
    MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)

  • The plugin: Podcasts.

    14 July 2010, by

    The podcasting problem is, once again, a problem that reveals the state of standardization of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly geared towards iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

On other sites (6420)

  • FFmpeg Integration in .NET MAUI for Android [closed]

    15 June 2024, by Billy Vanegas

    I'm facing a challenge with integrating FFmpeg into my .NET MAUI project for Android. While everything works smoothly on Windows with Visual Studio 2022, I'm having a hard time replicating this on the Android platform. Despite exploring various NuGet packages like FFMpegCore, which appear to be wrappers around FFmpeg but don't include FFmpeg itself, I'm still at a loss.

    I've tried following the instructions for integrating ffmpeg-kit for Android, but I keep running into issues, resulting in repeated failures and growing confusion. It feels like there is no straightforward way to seamlessly incorporate FFmpeg into a .NET MAUI project that works consistently across both iOS and Android.

    The Problem:

    I need to convert MP3 files to WAV format using FFmpeg on the Android platform within a .NET MAUI project. I’m using the FFMpegCore library and have downloaded the FFmpeg binaries from the official FFmpeg website.

    However, when attempting to use these binaries on an Android emulator, I encounter a permission denied error in the working directory: /data/user/0/com.companyname.projectname/files/ffmpeg

    Here’s the code snippet where the issue occurs:

    await FFMpegArguments
      .FromFileInput(mp3Path)
      .OutputToFile(wavPath, true, options => options
          .WithAudioCodec("pcm_s16le")
          .WithAudioSamplingRate(44100)
          .WithAudioBitrate(320000)
          )
      .ProcessAsynchronously();
    
    I've updated AndroidManifest.xml with permissions, but the issue persists.
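
    (As far as I can tell, these manifest permissions only govern things like shared-storage access; they do not grant execute permission on a file inside the app's private files directory, so the error looks like a filesystem/exec restriction rather than a missing manifest entry.)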

    I've created a method ConvertMp3ToWav to handle the conversion.
    I also have a method ExtractFFmpegBinaries that manages extraction of the FFmpeg binaries, but it seems the permission issue might be tied to how these binaries are accessed or executed.
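
    For illustration, here is a minimal sketch of the extraction step that sets the executable bit from managed code once the write stream has been closed, instead of shelling out to chmod. It assumes the .NET 7+ File.SetUnixFileMode API and a hypothetical asset location, and it is not code I have verified on a device:

    // Minimal sketch, not verified on a device.
    // Assumes .NET 7+ (System.IO.File.SetUnixFileMode) and the Android bindings (Android.Content.Context).
    private static async Task<string> ExtractFFmpegAsync(Context context)
    {
        // Target path inside the app's private files directory, e.g. /data/user/0/<package>/files/ffmpeg
        string targetPath = Path.Combine(context.FilesDir!.AbsolutePath, "ffmpeg");

        if (!File.Exists(targetPath))
        {
            // "Libs/x86/ffmpeg" is a hypothetical asset path; adjust to the real packaging.
            using Stream asset = context.Assets!.Open("Libs/x86/ffmpeg");
            await using (FileStream target = File.Create(targetPath))
            {
                await asset.CopyToAsync(target);
            } // the stream is closed and flushed here, before the mode is changed

            // rwxr-xr-x, the managed equivalent of "chmod 755", applied synchronously in-process
            File.SetUnixFileMode(targetPath,
                UnixFileMode.UserRead  | UnixFileMode.UserWrite | UnixFileMode.UserExecute |
                UnixFileMode.GroupRead | UnixFileMode.GroupExecute |
                UnixFileMode.OtherRead | UnixFileMode.OtherExecute);
        }

        return targetPath;
    }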

    AndroidManifest.xml:

    <?xml version="1.0" encoding="utf-8"?>
    <manifest>
        <application></application>
    </manifest>


    Method ConvertMp3ToWav:


    private async Task ConvertMp3ToWav(string mp3Path, string wavPath)
    {
        try
        {
            // Check directory and create if not exists
            var directory = Path.GetDirectoryName(wavPath);
            if (!Directory.Exists(directory))
                Directory.CreateDirectory(directory!);

            // Check if WAV file exists
            if (!File.Exists(wavPath))
                Console.WriteLine($"File not found {wavPath}, creating empty file.");
                using var fs = new FileStream(wavPath, FileMode.CreateNew);

            // Check if MP3 file exists
            if (!File.Exists(mp3Path))
                Console.WriteLine($"File not found {mp3Path}");

            // Extract FFmpeg binaries
            string? ffmpegBinaryPath = await ExtractFFmpegBinaries(Platform.AppContext);

            // Configure FFmpeg options
            FFMpegCore.GlobalFFOptions.Configure(new FFOptions { BinaryFolder = Path.GetDirectoryName(ffmpegBinaryPath!)! });

            // Convert MP3 to WAV
            await FFMpegArguments
                  .FromFileInput(mp3Path)
                  .OutputToFile(wavPath, true, options => options
                      .WithAudioCodec("pcm_s16le")
                      .WithAudioSamplingRate(44100)
                      .WithAudioBitrate(320000)
                      )
                  .ProcessAsynchronously();
        }
        catch (Exception ex)
        {
            Console.WriteLine($"An error occurred during the conversion process: {ex.Message}");
            throw;
        }
    }


    Method ExtractFFmpegBinaries:


    private async Task<string> ExtractFFmpegBinaries(Context context)
    {
        var architectureFolder = "x86"; // Adjust according to device architecture
        var ffmpegBinaryName = "ffmpeg";
        var ffmpegBinaryPath = Path.Combine(context.FilesDir!.AbsolutePath, ffmpegBinaryName);
        var tempFFMpegFileName = Path.Combine(FileSystem.AppDataDirectory, ffmpegBinaryName);

        if (!File.Exists(ffmpegBinaryPath))
        {
            try
            {
                var assetPath = $"Libs/{architectureFolder}/{ffmpegBinaryName}";
                using var assetStream = context.Assets!.Open(assetPath);

                await using var tempFFMpegFile = File.OpenWrite(tempFFMpegFileName);
                await assetStream.CopyToAsync(tempFFMpegFile);

                // Adjust permissions for FFmpeg binary
                Java.Lang.Runtime.GetRuntime()!.Exec($"chmod 755 {tempFFMpegFileName}");
            }
            catch (Exception ex)
            {
                Console.WriteLine($"An error occurred while extracting FFmpeg binaries: {ex.Message}");
                throw;
            }
        }
        else
        {
            Console.WriteLine($"FFmpeg binaries already extracted to: {ffmpegBinaryPath}");
        }

        return tempFFMpegFileName!;
    }
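
    Two things I noticed about the method above: Runtime.Exec launches chmod asynchronously while the FileStream is still open inside its await using scope, so the mode change (and even the flushed data) may not be in place by the time FFMpegCore tries to run the binary. Also, if I read the Android documentation correctly, apps targeting Android 10 (API 29) or later are not allowed to execute files from their writable app data directory at all, so even a correct chmod on /data/user/0/.../files/ffmpeg may not be enough; the usual approach seems to be packaging the executable in the APK's native-library folder and running it from ApplicationInfo.NativeLibraryDir. A rough sketch of that idea (the libffmpeg.so name and the jniLibs packaging are assumptions on my part, not something I have working):

    // Rough sketch only: run an ffmpeg executable that was packaged as a "native library"
    // (for example under Platforms/Android/jniLibs/arm64-v8a/libffmpeg.so), so that Android
    // extracts it into the app's native-library directory, where executing it is allowed.
    private static async Task<int> RunFFmpegAsync(string arguments)
    {
        // Something like /data/app/<package>/lib/arm64; files here keep their exec permission.
        string nativeLibDir = Android.App.Application.Context.ApplicationInfo!.NativeLibraryDir!;
        string ffmpegPath = Path.Combine(nativeLibDir, "libffmpeg.so"); // assumed file name

        var psi = new System.Diagnostics.ProcessStartInfo
        {
            FileName = ffmpegPath,
            // e.g. "-i in.mp3 -c:a pcm_s16le -ar 44100 out.wav" for the MP3-to-WAV case
            Arguments = arguments,
            RedirectStandardError = true,
            UseShellExecute = false,
        };

        using var process = System.Diagnostics.Process.Start(psi)!;
        string log = await process.StandardError.ReadToEndAsync();
        process.WaitForExit();
        Console.WriteLine(log);
        return process.ExitCode;
    }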


    What I Need:


    How do I correctly integrate and use FFmpeg in my .NET MAUI project for Android? Specifically:

    • How to properly set up and configure FFmpeg binaries for use on Android within a .NET MAUI project.

    • How to resolve the permission denied issue when attempting to execute FFmpeg binaries.

  • How To Play Hardware Accelerated Video on A Mac

    28 May 2013, by Multimedia Mike — General

    I have a friend who was considering purchasing a Mac Mini recently. At the time of this writing, there are 3 desktop models (and 2 more “server” models).


    Apple Mac Mini

    The cheapest one is a Core i5 2.5 GHz. Then there are 2 Core i7 models: 2.3 GHz and 2.6 GHz. The difference between the latter 2 is US$100. The only appreciable technical difference is the extra 0.3 GHz and the choice came down to those 2.

    He asked me which one would be able to play HD video at full frame rate. I found this query puzzling. But then, I have been “in the biz” for a bit too long. Whether or not a computer or device can play a video well depends on a lot of factors.

    Hardware Support
    First of all, looking at the raw speed of the general-purpose CPU inside of a computer as a gauge of video playback performance is generally misguided in this day and age. In general, we have a video standard (H.264, which I’ll focus on for this post) and many bits of hardware are able to accelerate decoding. So, the question is not whether the CPU can decode the data in real time, but can any other hardware in the device (likely the graphics hardware) handle it? These machines have Intel HD 4000 graphics and, per my reading of the literature, they are capable of accelerating H.264 video decoding.

    Great, so the hardware supports accelerated decoding. So it’s a done deal, right? Not quite…

    Operating System Support
    An application can’t do anything pertaining to hardware without permission from the operating system. So the next question is: Does Mac OS X allow an application to access accelerated video decoding hardware if it’s available? This used to be a contentious matter (notably, Adobe Flash Player was unable to accelerate H.264 playback on Mac in the absence of such an API) but then Apple released an official API detailed in Technical Note TN2267.

    So, does this mean that video is magically accelerated? Nope, we’re still not there yet…

    Application Support
    It’s great that all of these underlying pieces are in place, but if an individual application chooses to decode the video directly on the CPU, it’s all for naught. An application needs to query the facilities and direct data through the API if it wants to leverage the acceleration. Obviously, at this point it becomes a matter of “which application?”

    My friend eventually opted to get the pricier of the desktop Mac Mini models and we ran some ad-hoc tests since I was curious how widespread the acceleration support is among Mac multimedia players. Here are some programs I wanted to test, playing 1080p H.264:

    • Apple QuickTime Player
    • VLC
    • YouTube with Flash Player (any browser)
    • YouTube with Safari/HTML5
    • YouTube with Chrome/HTML5
    • YouTube with Firefox/HTML5
    • Netflix

    I didn’t take exhaustive notes but my impromptu tests revealed QuickTime Player was, far and away, the most performant player, occupying only around 5% of the CPU according to the Mac OS X System Profiler graph (which is likely largely spent on audio decoding).

    VLC consistently required 20-30% CPU, so it’s probably leveraging some acceleration facilities. I think that Flash Player and the various HTML5 elements performed similarly (their multi-process architectures can make such a trivial profiling test difficult).

    The outlier was Netflix running in Firefox via Microsoft’s Silverlight plugin. Of course, the inner workings of Netflix’s technology are opaque to outsiders and we don’t even know if it uses H.264. It may very well use Microsoft’s VC-1 which is not a capability provided by the Mac OS X acceleration API (it doesn’t look like the Intel HD 4000 chip can handle it either). I have never seen any data one way or another about how Netflix encodes video. However, I was able to see that Netflix required an enormous amount of CPU muscle on the Mac platform.

    Conclusion
    The foregoing is a slight simplification of the video playback pipeline. There are some other considerations, most notably how the video is displayed afterwards. To circle back around to the original question: Can the Mac Mini handle full HD video playback? As my friend found, the meager Mac Mini can do an admirable job at playing full HD video without loading down the CPU.

  • FFmpeg filter config with aecho fails to configure all the links and formats - avfilter_graph_config

    23 January 2021, by cs guy

    I am following the official FFmpeg tutorial to create a filter chain. The tutorial shows how to pass data through a chain as follows:


    The filter chain it uses is: (input) -> abuffer -> volume -> aformat -> abuffersink -> (output)


    Here is my code - sorry for the boilerplate, it is just the FFmpeg way :(


    frame = av_frame_alloc();
    filterGraph = avfilter_graph_alloc();

    if (!frame) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Could not allocate memory for frame");
        return;
    }

    if (!filterGraph) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor FXProcessor! %s", av_err2str(AVERROR(ENOMEM)));
        return;
    }

    const AVFilter *abuffer;
    const AVFilter *abuffersink;
    AVFilterContext *aformat_ctx;
    const AVFilter *aformat;
    AVFilterContext *choisen_beat_fx_ctx;
    const AVFilter *choisen_beat_fx;

    /* Create the abuffer filter;
     * it will be used for feeding the data into the graph. */
    abuffer = avfilter_get_by_name("abuffer");
    if (!abuffer) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Could not find the abuffer filter!");
        return;
    }
    abuffer_ctx = avfilter_graph_alloc_filter(filterGraph, abuffer, "src");
    if (!abuffer_ctx) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Could not allocate the abuffer_ctx instance! %s",
             av_err2str(AVERROR(ENOMEM)));
        return;
    }

    char ch_layout[64];
    /* Set the filter options through the AVOptions API. */
    av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0, AV_CH_LAYOUT_STEREO);
    av_opt_set(abuffer_ctx, "channel_layout", ch_layout, AV_OPT_SEARCH_CHILDREN);
    av_opt_set(abuffer_ctx, "sample_fmt", av_get_sample_fmt_name(AV_SAMPLE_FMT_FLT),
               AV_OPT_SEARCH_CHILDREN);
    av_opt_set_q(abuffer_ctx, "time_base", (AVRational) {1, defaultSampleRate},
                 AV_OPT_SEARCH_CHILDREN);
    av_opt_set_int(abuffer_ctx, "sample_rate", defaultSampleRate, AV_OPT_SEARCH_CHILDREN);
    /* Now initialize the filter; we pass NULL options, since we have already
     * set all the options above. */

    if (avfilter_init_str(abuffer_ctx, nullptr) < 0) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Could not initialize the abuffer filter!");
        return;
    }

    // TODO: select FX's dynamically
    /* Create aecho filter. */
    if (true) {

        choisen_beat_fx = avfilter_get_by_name("volume");
        if (!choisen_beat_fx) {
            *mediaLoadPointer = FAILED_TO_LOAD;
            LOGE("FXProcessor::FXProcessor Could not find the aecho filter!");
            return;
        }

        choisen_beat_fx_ctx = avfilter_graph_alloc_filter(filterGraph, choisen_beat_fx, "echo");
        if (!choisen_beat_fx_ctx) {
            *mediaLoadPointer = FAILED_TO_LOAD;
            LOGE("FXProcessor::FXProcessor Could not allocate the choisen_beat_fx_ctx instance! %s",
                 av_err2str(AVERROR(ENOMEM)));
            return;
        }

        av_opt_set(choisen_beat_fx_ctx, "volume", AV_STRINGIFY(0.5), AV_OPT_SEARCH_CHILDREN);

        if (avfilter_init_str(choisen_beat_fx_ctx, nullptr) < 0) {
            *mediaLoadPointer = FAILED_TO_LOAD;
            LOGE("FXProcessor::FXProcessor Could not initialize the choisen_beat_fx_ctx filter!");
            return;
        }
    }

    /* Create the aformat filter;
     * it ensures that the output is of the format we want. */
    aformat = avfilter_get_by_name("aformat");
    if (!aformat) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Could not find the aformat filter!");
        return;
    }
    aformat_ctx = avfilter_graph_alloc_filter(filterGraph, aformat, "aformat");
    if (!aformat_ctx) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Could not allocate the aformat instance!");
        return;
    }

    av_opt_set(aformat_ctx, "sample_fmts", av_get_sample_fmt_name(AV_SAMPLE_FMT_FLT),
               AV_OPT_SEARCH_CHILDREN);
    av_opt_set_int(aformat_ctx, "sample_rates", defaultSampleRate, AV_OPT_SEARCH_CHILDREN);
    av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0, AV_CH_LAYOUT_STEREO);
    av_opt_set(aformat_ctx, "channel_layouts", ch_layout, AV_OPT_SEARCH_CHILDREN);

    if (avfilter_init_str(aformat_ctx, nullptr) < 0) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Could not initialize the aformat filter!");
        return;
    }

    /* Finally create the abuffersink filter;
     * it will be used to get the filtered data out of the graph. */
    abuffersink = avfilter_get_by_name("abuffersink");
    if (!abuffersink) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Could not find the abuffersink filter!");
        return;
    }

    abuffersink_ctx = avfilter_graph_alloc_filter(filterGraph, abuffersink, "sink");
    if (!abuffersink_ctx) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Could not allocate the abuffersink instance!");
        return;
    }

    /* This filter takes no options. */
    if (avfilter_init_str(abuffersink_ctx, nullptr) < 0) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Could not initialize the abuffersink instance.!");
        return;
    }

    /* Connect the filters;
     * in this simple case the filters just form a linear chain. */
    if (avfilter_link(abuffer_ctx, 0, choisen_beat_fx_ctx, 0) != 0) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Error connecting filters.!");
        return;
    }
    if (avfilter_link(choisen_beat_fx_ctx, 0, aformat_ctx, 0) != 0) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Error connecting filters.!");
        return;
    }
    if (avfilter_link(aformat_ctx, 0, abuffersink_ctx, 0) != 0) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Error connecting filters.!");
        return;
    }

    /* Configure the graph. */
    if (avfilter_graph_config(filterGraph, nullptr) < 0) {
        *mediaLoadPointer = FAILED_TO_LOAD;
        LOGE("FXProcessor::FXProcessor Error configuring the filter graph!");
        return;
    }


    This code works fine when the chain is

    • (input) -> abuffer -> volume -> aformat -> abuffersink -> (output)

    However, I would like to use aecho instead of the volume filter. So I want:


    (input) -> abuffer -> aecho -> aformat -> abuffersink -> (output)


    I changed the code at


    choisen_beat_fx = avfilter_get_by_name("volume");


    to


    choisen_beat_fx = avfilter_get_by_name("aecho");


    and removed the line


    av_opt_set(choisen_beat_fx_ctx, "volume", AV_STRINGIFY(0.5), AV_OPT_SEARCH_CHILDREN);


    Everything goes smoothly until the last line: avfilter_graph_config fails and returns a negative value. The function's documentation says:


    avfilter_graph_config: Check validity and configure all the links and formats in the graph.


    So my guess is that I need extra links to insert aecho into my chain? How can I insert aecho into my filter chain?
