Advanced search

Media (1)

Keyword: - Tags -/bug

Other articles (56)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can modify their information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first MediaSPIP stable release.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

On other sites (11572)

  • libavcodec/dnxhd : Enable 12-bit DNxHR support.

    2 August 2016, by Steven Robertson
    libavcodec/dnxhd : Enable 12-bit DNxHR support.
    

    10- and 12-bit DNxHR use the same DC coefficient decoding process and
    VLC table, just with a different shift value. From SMPTE 2019-1:2016,
    8.2.4 DC Coefficient Decoding:

    "For 8-bit video sampling, the maximum value of η=11 and for
    10-/12-bit video sampling, the maximum value of η=13."

    A sample file will be uploaded to show that with this patch, things
    decode correctly:
    dnxhr_hqx_12bit_1080p_smpte_colorbars_davinci_resolve.mov

    Signed-off-by: Steven Robertson <steven@strobe.cc>
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
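
    A quick way to check the sample once it is available (just a sketch, not part of the patch; the null muxer simply discards the decoded output):

    ffmpeg -i dnxhr_hqx_12bit_1080p_smpte_colorbars_davinci_resolve.mov -f null -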

    • [DH] libavcodec/dnxhddec.c
  • How to improve Desktop capture performance and quality with ffmpeg [closed]

    6 November 2024, by Francesco Bramato

    I'm developing a game capture feature for my Electron app. I've been working on this for a while and have tried a lot of different parameter combinations; now I'm running out of ideas :)

    I've read tons of ffmpeg documentation, SO posts, and other sites, but I'm not really an ffmpeg expert or a video editing pro.

    This is how it works now:

    The app spawns an ffmpeg command based on the user's settings:

    • Output format (mp4, mkv, avi)
    • Framerate (12, 24, 30, 60)
    • Codec (X264, NVIDIA NVENC, AMD AMF)
    • Bitrate (from 1000 to 10000 kbps)
    • Presets (for X264)
    • Audio output (a dshow device like StereoMix or VB-Cable) and audio input (a dshow device like the Microphone)
    • Final resolution (720p, 1080p, 2K, Original Size)

    The command executed so far is:


    ffmpeg.exe -nostats -hide_banner -hwaccel cuda -hwaccel_output_format cuda -f gdigrab -draw_mouse 0 -framerate 60 -offset_x 0 -offset_y 0 -video_size 2560x1440 -i desktop -f dshow -i audio=@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{D61FA53D-FA37-4BE7-BE2F-4005F94790BB} -ar 44100 -colorspace bt709 -color_trc bt709 -color_primaries bt709 -c:v h264_nvenc -b:v 6000k -preset slow -rc cbr -profile:v high -g 60 -acodec aac -maxrate 6000k -bufsize 12000k -pix_fmt yuv420p -f mpegts -


    One of the settings is the recording mode: full game session or replay buffer. For a full game session the output is a file; for the replay buffer it is stdout.


    The output format is mpegts because, as far as I have read in many places, the video stream can be cut at any moment.


    Replays are cut with different past and future durations based on game events.


    In full game session mode, the replays are cut directly from the mpegts file.


    In replay buffer mode, the ffmpeg stdout is redirected to the app, which records the buffer (1 or 2 minutes). When a replay must be created, the app saves the relevant buffer section to disk according to the past and future durations and, with another ffmpeg command, copies it to a final mp4 or mkv file.
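
    For reference, that second ffmpeg command is essentially a stream copy of the saved buffer section. A minimal sketch of it, with placeholder file names and durations (the real values come from the game event):

    ffmpeg -ss 00:00:10 -i replay_buffer.ts -t 30 -c copy replay.mp4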


    Generally speaking, this works reliably.


    There are a few issues:


    • Even though I ask ffmpeg to capture at 60 fps, the final result is at 30 fps (using -r 60 will speed up the final result; see the sketch after this list)
    • Some users have reported FPS drops in-game, especially when using NVIDIA NVENC (and having an NVIDIA GPU); using X264 seems to save some FPS
    • The colors look strange compared to what I see on screen; they seem washed out. I may have solved this using -colorspace bt709 -color_trc bt709 -color_primaries bt709, but I don't know if it is the right choice
    • NVIDIA NVENC with any preset other than slow creates terribly laggy videos
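
    For the framerate point, here is a minimal variant I could test. It is only a sketch, under the assumption that the placement of the rate option matters: with gdigrab, -framerate 60 before -i sets the capture rate, while -r 60 placed after -i only duplicates or drops frames to hit the output rate (the output file name is a placeholder).

    ffmpeg -f gdigrab -draw_mouse 0 -framerate 60 -offset_x 0 -offset_y 0 -video_size 2560x1440 -i desktop -r 60 -c:v h264_nvenc -b:v 6000k -maxrate 6000k -bufsize 12000k -pix_fmt yuv420p -f mpegts test_60fps.ts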

    Here are two examples, both 60 FPS, NVIDIA NVENC (slow preset, 6000 kbps), MP4:


    Recorded by my app: https://www.youtube.com/watch?v=Msm62IwHdlk


    Recorded by OBS with nearly the same settings: https://youtu.be/WuHoLh26W7E


    Hope someone can help me.


    Thanks!


  • How can I upscale a stereo signal using PLII on a VM

    27 March 2024, by andersmi

    I want to upscale a stereo signal with PLII from an input on a VM and send it to an output after the upscale.


    I am thinking of installing Voicemeeter/Virtual Audio Cable or something like that to get an input on the VM. I will then be able to use Dante Via on the VM host to send the audio to the VM input, receive it again from the VM output in Dante Via, and then send it to my amplifier. The solution needs to be able to initialize itself after a reboot.


    I have looked into different solutions; the most promising seem to be FFDShow and FFmpeg, but there is not much information about how to do this. I don't care what is used as long as it supports PLII.


    I am looking for information on how to use FFDShow/FFmpeg with an application I develop, or any other way to solve this.
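
    For what it's worth, here is a minimal ffmpeg sketch of the kind of processing I am after, assuming the surround audio filter is an acceptable stand-in for a true PLII decode and using placeholder file names (in practice the input and output would be the virtual audio devices):

    ffmpeg -i stereo_matrix_input.wav -af surround -c:a pcm_s16le upmixed_5.1.wav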


    Thanks
