Other articles (29)

  • Other interesting software

    13 April 2011, by

    We don't claim to be the only ones doing what we do, and certainly not the best at it. We simply try to do it well, and to keep getting better.
    The following list presents software that is more or less similar to MediaSPIP, or whose goals MediaSPIP more or less shares.
    We don't know these projects and haven't tried them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several plugins, in addition to those used by the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting instance at user sign-up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is extracted so that search engines can index it, and documents are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
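
    These conversions are typically performed with FFmpeg under the hood. As a hedged illustration only (these are not MediaSPIP's actual invocations, and the filenames and quality flags are placeholders), commands of this general shape produce the formats listed above:

    # WebM (HTML5): VP8 video + Vorbis audio
    ffmpeg -i input.mov -c:v libvpx -b:v 1M -c:a libvorbis output.webm
    # Ogg Theora + Vorbis (HTML5)
    ffmpeg -i input.mov -c:v libtheora -q:v 7 -c:a libvorbis output.ogv
    # MP4: H.264 + AAC (also used for Flash playback)
    ffmpeg -i input.mov -c:v libx264 -crf 23 -c:a aac output.mp4
    # MP3 (Flash) and Ogg Vorbis (HTML5) audio
    ffmpeg -i input.wav -c:a libmp3lame -q:a 2 output.mp3
    ffmpeg -i input.wav -c:a libvorbis -q:a 5 output.ogg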

On other sites (3751)

  • FFMPEG Overlay Fade In/Out

    2 August 2017, by bluesummers

    I'm trying to create a fading in/out overlay with ffmpeg,
    following these links: 1, 2, 3

    I can’t figure out why, but I can’t seem to get it to work.

    A static watermark works fine with this code:

    ffmpeg -i video.mp4 -i wattermark.png -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -codec:a copy -y result.mp4

    Changing it to

    ffmpeg -i video.mp4 -i wattermark.png -filter_complex "[1:0] format=rgba,fade=in:st=0:d=3:alpha=1,fade=out:st=6:d=3:alpha=1 [ovr];[0:0][ovr] overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -codec:a copy -y result.mp4

    Results in nothing happening at all...

    What am I doing wrong?
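
    For readers, the filtergraph above decomposes as follows (a commented restatement, not new functionality):

    # [1:0] format=rgba          take the watermark, ensure it has an alpha plane
    # fade=in:st=0:d=3:alpha=1   fade only the alpha in during seconds 0-3
    # fade=out:st=6:d=3:alpha=1  fade the alpha out during seconds 6-9
    # [0:0][ovr] overlay=...     composite the faded image, centered, over the video

    One commonly suggested explanation for this exact symptom is that a lone PNG input yields a single frame, so time-based fades have nothing to act on; looping the image gives the fades a timeline. The -loop 1 flag and shortest=1 option below are that suggestion, not a verified answer:

    ffmpeg -i video.mp4 -loop 1 -i wattermark.png -filter_complex "[1:0] format=rgba,fade=in:st=0:d=3:alpha=1,fade=out:st=6:d=3:alpha=1 [ovr];[0:0][ovr] overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2:shortest=1" -codec:a copy -y result.mp4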

  • Failed to execute: 0x80070057 when decoding video via ffmpeg with dxva2

    25 March 2019, by CD83

    I have successfully implemented a video player using ffmpeg. I am now trying to use hardware decoding, but I'm facing a couple of issues.
    I found a post that I followed as a starting point here: https://ffmpeg.org/pipermail/libav-user/2014-August/007323.html

    I have updated the code that sets up the necessary parts of the decoder. The updated code is available here: https://drive.google.com/file/d/0B5ufHdoDzA4ieVk5UVpxcDNzRHc/view?usp=sharing

    And this is how I'm using it to initialize the decoder:

    // Prepare the decoding context
    AVCodec *codec = nullptr;
    _codecContext = _avFormatContext->streams[_streamIndex]->codec;
    if ((codec = avcodec_find_decoder(_codecContext->codec_id)) == 0)
    {
       std::cout << "Unsupported video codec!" << std::endl;
       return false;
    }

    _codecContext->thread_count = 1;  // Multithreading is apparently not compatible with hardware decoding
    InputStream *ist = new InputStream();
    ist->hwaccel_id = HWACCEL_AUTO;
    ist->hwaccel_device = "dxva2";
    ist->dec = codec;
    ist->dec_ctx = _codecContext;
    _codecContext->coded_width = _width;
    _codecContext->coded_height = _height;

    _codecContext->opaque = ist;
    dxva2_init(_codecContext);

    _codecContext->get_buffer2 = ist->hwaccel_get_buffer;
    _codecContext->get_format = GetHwFormat;
    _codecContext->thread_safe_callbacks = 1;

    if (avcodec_open2(_codecContext, codec, nullptr) < 0)
    {
       std::cout << "Video codec open error" << std::endl;
       return false;
    }

    And here is the definition of GetHwFormat referenced above:

    AVPixelFormat GetHwFormat(AVCodecContext *s, const AVPixelFormat *pix_fmts)
    {
       InputStream* ist = (InputStream*)s->opaque;
       ist->active_hwaccel_id = HWACCEL_DXVA2;
       ist->hwaccel_pix_fmt = AV_PIX_FMT_DXVA2_VLD;
       return ist->hwaccel_pix_fmt;
    }
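
    For comparison, newer FFmpeg releases (3.x and later) expose hardware decoding through a public API, which avoids copying the InputStream machinery out of ffmpeg.c. A minimal sketch, not the asker's code, with error handling trimmed:

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>
    }

    // Pick the DXVA2 surface format if the decoder offers it.
    static AVPixelFormat get_hw_format(AVCodecContext *, const AVPixelFormat *fmts)
    {
        for (const AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; ++p)
            if (*p == AV_PIX_FMT_DXVA2_VLD)
                return *p;
        return AV_PIX_FMT_NONE;
    }

    bool init_dxva2(AVCodecContext *codec_ctx)
    {
        AVBufferRef *hw_device_ctx = nullptr;
        if (av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_DXVA2,
                                   nullptr, nullptr, 0) < 0)
            return false;                          // no usable DXVA2 device
        codec_ctx->hw_device_ctx = hw_device_ctx;  // context takes the reference
        codec_ctx->get_format = get_hw_format;
        return true;                               // then call avcodec_open2()
    }

    Decoded frames then arrive as AV_PIX_FMT_DXVA2_VLD; av_hwframe_transfer_data() copies a frame back to system memory when software access is needed.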

    When I open an mp4 (encoded in h264) video that is HD resolution or less, everything seems to be working fine. However, as soon as I try higher-resolution videos like 3840x2160, I get the following errors repeatedly:

    Failed to execute: 0x80070057
    Hardware accelerator failed to decode picture

    I also start getting the following errors after a few seconds:

    co located POCs unavailable

    And the video is not displayed properly: I get a lot of artifacts all over the video, and it is lagging. I checked the first error in the ffmpeg source code. It seems that IDirectXVideoDecoder_Execute fails because of an invalid parameter. Since this is happening within ffmpeg, there must be something that I'm missing, but I can't figure out what. The only relevant post that I found with this error blamed multithreading, but I set thread_count to 1 before opening the codec.

    This issue is happening on my main computer, which has the following specs:

    • i7-4790 CPU @ 3.6GHz
    • RAM 16 GB
    • Intel HD Graphics 4600
    • Windows 8.1

    The same issue is not happening on my second computer, which has the following specs:

    • i7-4510U @ 2GHz
    • RAM 8 GB
    • NVIDIA GeForce GTX 750Ti
    • Windows 10

    If I use DXVAChecker on my main computer, it says that my graphics card supports DXVA2 for H264_VLD_*, and I can see that the calls to the Microsoft API are being made (DXVA2_DecodeDeviceCreated, DXVA2_DecodeDeviceBeginFrame, DXVA2_DecodeDeviceGetBuffer, DXVA2_DecodeDeviceExecute, DXVA2_DecodeDeviceEndFrame) while my video is playing.

    I also don't see any increase in GPU usage (on either computer) between the version with hardware decoding and the version without; however, I do see a decrease in CPU usage (not as much as I was expecting, though). This is also very strange.

    Note that I tried both the Windows release available on the FFmpeg website and a version that I compiled with --enable-dxva2. I have searched a lot already, but I was unable to find what I'm doing wrong.

    Hopefully, someone can help me, or maybe point me to a better example?

  • Emscripten and Web Audio API

    29 April 2015, by Multimedia Mike — HTML5

    Ha! They said it couldn't be done! Well, to be fair, I said it couldn't be done. Or maybe that I just didn't have any plans to do it. But I did it– I used Emscripten to cross-compile a CPU-intensive C/C++ codebase (Game Music Emu) to JavaScript. Then I leveraged the Web Audio API to output audio and visualize the audio using an HTML5 canvas.

    Want to see it in action? Here's a demonstration. Perhaps I will be able to expand the reach of my Game Music site when I can drop the odd Native Client plugin. This JS-based player works great on Chrome, Firefox, and Safari across desktop operating systems.

    But this endeavor was not without its challenges.

    Programmatically Generating Audio
    First, I needed to figure out the proper method for procedurally generating audio and making it available for output. Generally, there are 2 approaches to audio output:

    1. Sit in a loop and generate audio, writing it out via a blocking audio call
    2. Implement a callback that the audio system can invoke in order to generate more audio when needed

    Option #1 is not a good idea for an event-driven language like JavaScript. So I hunted through the rather flexible Web Audio API for a method that allowed something like approach #2. Callbacks are everywhere, after all.

    I eventually found what I was looking for with the ScriptProcessorNode. It seems to be intended to apply post-processing effects to audio streams. A program registers a callback which is passed configurable chunks of audio for processing. I subverted this by simply overwriting the input buffers with the audio generated by the Emscripten-compiled library.
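
    A minimal sketch of that subversion, in TypeScript; generateAudio() is a hypothetical stand-in for the Emscripten-exported generator:

    declare function generateAudio(left: Float32Array, right: Float32Array): void;

    const ctx = new AudioContext();
    // Arguments: buffer size, input channels, output channels.
    const node = ctx.createScriptProcessor(4096, 0, 2);
    node.onaudioprocess = (e: AudioProcessingEvent) => {
      // Overwrite the node's buffers with freshly synthesized samples.
      generateAudio(e.outputBuffer.getChannelData(0),
                    e.outputBuffer.getChannelData(1));
    };
    node.connect(ctx.destination); // the graph now pulls samples on demand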

    The ScriptProcessorNode interface is fairly well documented and works across multiple browsers. However, it is already marked as deprecated:

    Note : As of the August 29 2014 Web Audio API spec publication, this feature has been marked as deprecated, and is soon to be replaced by Audio Workers.

    Despite being marked as deprecated for 8 months as of this writing, there exists no appreciable amount of documentation for the successor API, these so-called Audio Workers.

    Vive la web standards!

    Visualize This
    The next problem was visualization. The Web Audio API provides the AnalyserNode API for accessing both time- and frequency-domain data from a running audio stream (and fetching the data as either unsigned bytes or floating-point numbers, depending on what the application needs). This is a pretty neat idea. I just wish I could make the API work. The simple demos I could find worked well enough. But when I wired up a prototype to fetch and visualize the time-domain wave, all I got were center-point samples (an array of values that were all 128).

    Even if the API did work, I'm not sure if it would have been that useful. Per my reading of the AnalyserNode API, it only returns data as a single channel. Why would I want that? My application supports audio with 2 channels. I want 2 channels of data for visualization.
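
    For reference, the standard time-domain usage being attempted looks roughly like this (ctx and node as in the sketch above); 128 is the unsigned-byte center point, matching the flat arrays described:

    const analyser = ctx.createAnalyser();
    analyser.fftSize = 2048;
    node.connect(analyser); // tap the stream anywhere in the graph

    const timeDomain = new Uint8Array(analyser.fftSize);
    analyser.getByteTimeDomainData(timeDomain); // 128 == center/silence
    // Frequency-domain data is available too, via getByteFrequencyData().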

    How To Synchronize
    So I rolled my own visualization solution by maintaining a circular buffer of audio when samples were being generated. Then, requestAnimationFrame() provided the rendering callbacks. The next problem was audio-visual sync. But that certainly is not unique to this situation– maintaining proper A/V sync is a perennial puzzle in real-time multimedia programming. I was able to glean enough timing information from the environment to achieve reasonable A/V sync (verify for yourself).
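
    A sketch of that arrangement (sizes and names are illustrative):

    // Ring buffer filled by the audio callback, read by the render callback.
    const RING_SIZE = 32768;
    const ring = new Float32Array(RING_SIZE);
    let writePos = 0;

    function pushSamples(samples: Float32Array): void {
      for (const s of samples) {
        ring[writePos] = s;
        writePos = (writePos + 1) % RING_SIZE;
      }
    }

    function draw(): void {
      // Read the most recent window ending at writePos, paint it to a canvas,
      // and use AudioContext timing to decide which window counts as "now".
      requestAnimationFrame(draw);
    }
    requestAnimationFrame(draw);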

    Pause/Resume
    The next problem I encountered with the Web Audio API was pause/resume facilities, or the lack thereof. For all its bells and whistles, the API’s omission of such facilities seems most unusual, as if the design philosophy was, “Once the user starts playing audio, they will never, ever have cause to pause the audio.”

    Then again, I must understand that mine is not a use case that the design committee considered and I'm subverting the API in ways the designers didn't intend. Typical use cases for this API seem to include such workloads as:

    • Downloading, decoding, and playing back a compressed audio stream via the network, applying effects, and visualizing the result
    • Accessing microphone input, applying effects, visualizing, encoding and sending the data across the network
    • Firing sound effects in a gaming application
    • MIDI playback via JavaScript (this honestly amazes me)

    What they did not seem to have in mind was what I am trying to do– synthesize audio in real time.

    I implemented pause/resume in a sub-par manner: pausing has the effect of generating 0 values when the ScriptProcessorNode callback is invoked, while also canceling any animation callbacks. Thus, audio output is technically still occurring; it's just that the audio is pure silence. It's not a great solution because CPU is still being used.
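
    Sketched against the ScriptProcessorNode example above, the pause path zero-fills the output instead of stopping the node, which is why CPU is still consumed:

    let paused = false;

    node.onaudioprocess = (e: AudioProcessingEvent) => {
      const out = e.outputBuffer;
      if (paused) {
        // The callback still fires; emit pure silence.
        for (let ch = 0; ch < out.numberOfChannels; ch++)
          out.getChannelData(ch).fill(0);
        return;
      }
      generateAudio(out.getChannelData(0), out.getChannelData(1));
    };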

    Future Work
    I have a lot more player libraries to port to this new system. But I think I have a good framework set up.