Advanced search

Media (0)

Keyword: - Tags -/publication

No media matching your criteria is available on the site.

Other articles (65)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared to be "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images to compare.
    To do so, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (8499)

  • Buffer overrun Blackmagic Intensity 4K as input to FFmpeg

    24 May 2016, by colossus47

    I am trying to take direct video output from a 4K Sony Handycam, via HDMI, directly into a Blackmagic Intensity Pro 4K. I can verify that the camera, HDMI and Blackmagic card are working, as I can capture and view video using the provided "Media Express" program. When I use ffmpeg I do get video output, but I also get a buffer overrun.

    Here is the command:

    time ffmpeg -f decklink -i "Intensity Pro 4K@20" -c:v nvenc -b:v 100M -vf "yadif=0:-1:0" -pix_fmt yuv420p -crf 29.97 -strict -2 output.mp4

    And I get the following output:

    ffmpeg version N-76538-gb83c849 Copyright (c) 2000-2015 the FFmpeg developers
    built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
    configuration: --enable-nonfree --enable-nvenc --enable-nvresize --extra-cflags=-I../cudautils --extra-ldflags=-L../cudautils --enable-gpl --enable-libx264 --enable-libx265 --enable-decklink --extra-cflags=-I/home/tristan/Downloads/BlackmagicDeckLinkSDK10.6.5/Linux/include --extra-ldflags=-L/home/tristan/Downloads/BlackmagicDeckLinkSDK10.6.5/Linux/include
    libavutil      55.  5.100 / 55.  5.100
    libavcodec     57. 15.100 / 57. 15.100
    libavformat    57. 14.100 / 57. 14.100
    libavdevice    57.  0.100 / 57.  0.100
    libavfilter     6. 15.100 /  6. 15.100
    libswscale      4.  0.100 /  4.  0.100
    libswresample   2.  0.101 /  2.  0.101
    libpostproc    54.  0.100 / 54.  0.100
    [decklink @ 0x1ccd6e0] Found Decklink mode 3840 x 2160 with rate 29.97
    [decklink @ 0x1ccd6e0] Stream #1: not enough frames to estimate rate; consider increasing probesize
    Guessed Channel Layout for  Input Stream #0.0 : stereo
    Input #0, decklink, from 'Intensity Pro 4K@20':
    Duration: N/A, start: 0.000000, bitrate: 1536 kb/s
    Stream #0:0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
    Stream #0:1: Video: rawvideo (UYVY / 0x59565955), uyvy422, 3840x2160, -5 kb/s, 29.97 tbr, 1000k tbn, 29.97 tbc
    Codec AVOption crf (Select the quality for constant quality mode) specified for output file #0 (output.mp4) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
    File 'output.mp4' already exists. Overwrite ? [y/N] y
    Output #0, mp4, to 'output.mp4':
    Metadata:
    encoder         : Lavf57.14.100
    Stream #0:0: Video: h264 (nvenc) ([33][0][0][0] / 0x0021), yuv420p, 3840x2160, q=-1--1, 100000 kb/s, 29.97 fps, 30k tbn, 29.97 tbc
    Metadata:
    encoder         : Lavc57.15.100 nvenc
    Stream #0:1: Audio: aac ([64][0][0][0] / 0x0040), 48000 Hz, stereo, fltp, 128 kb/s
    Metadata:
    encoder         : Lavc57.15.100 aac
    Stream mapping:
    Stream #0:1 -> #0:0 (rawvideo (native) -> h264 (nvenc))
    Stream #0:0 -> #0:1 (pcm_s16le (native) -> aac (native))
    Press [q] to stop, [?] for help
    [decklink @ 0x1ccd6e0] Decklink input buffer overrun!:03.15 bitrate=70411.7kbits/s  
    Last message repeated 1 times
    [decklink @ 0x1ccd6e0] Decklink input buffer overrun!:03.54 bitrate=73110.9kbits/s  
    Last message repeated 20 times
    [decklink @ 0x1ccd6e0] Decklink input buffer overrun!:03.92 bitrate=76270.2kbits/s  
    Last message repeated 15 times
    [decklink @ 0x1ccd6e0] Decklink input buffer overrun!:04.28 bitrate=78367.6kbits/s  
    Last message repeated 61 times
    frame=  140 fps= 22 q=-0.0 Lsize=   57266kB time=00:00:04.67 bitrate=100425.2kbits/s  
    video:57187kB audio:72kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.009844%
    [decklink @ 0x1ccd6e0] Decklink input buffer overrun!
    Last message repeated 7 times
    [aac @ 0x1cd7020] Qavg: 215.556

    real   0m8.808s
    user   0m5.785s
    sys   0m1.749s

    Any insight into this would be appreciated, whether that is some commands that might fix the issue, or otherwise.
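
    For what it's worth, a slightly tidier sketch of the same command: the log above reports that the crf AVOption was not used for any stream (rate control here comes from -b:v 100M), so it can simply be dropped. This is just a cleanup of the command as posted, not a confirmed fix for the input buffer overrun:

    time ffmpeg -f decklink -i "Intensity Pro 4K@20" -c:v nvenc -b:v 100M \
         -vf "yadif=0:-1:0" -pix_fmt yuv420p -strict -2 output.mp4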

  • Issues when decoding video via ffmpeg with dxva2

    5 January 2016, by CD83

    I have successfully implemented a video player using ffmpeg. I am now trying to use hardware decoding, but I'm facing a couple of issues.
    I found a post that I followed as a starting point here: http://comments.gmane.org/gmane.comp.video.ffmpeg.libav.user/13523

    I have updated the code that sets up what the decoder needs. The updated code is available here: https://drive.google.com/file/d/0B5ufHdoDzA4ieVk5UVpxcDNzRHc/view?usp=sharing

    And this is how I'm using it to initialize the decoder:

    // Prepare the decoding context
    AVCodec *codec = nullptr;
    _codecContext = _avFormatContext->streams[_streamIndex]->codec;
    if ((codec = avcodec_find_decoder(_codecContext->codec_id)) == 0)
    {
       std::cout << "Unsupported video codec!" << std::endl;
       return false;
    }

    _codecContext->thread_count = 1;  // Multithreading is apparently not compatible with hardware decoding
    InputStream *ist = new InputStream();
    ist->hwaccel_id = HWACCEL_AUTO;
    ist->hwaccel_device = "dxva2";
    ist->dec = codec;
    ist->dec_ctx = _codecContext;
    _codecContext->coded_width = _width;
    _codecContext->coded_height = _height;

    _codecContext->opaque = ist;
    dxva2_init(_codecContext);

    _codecContext->get_buffer2 = ist->hwaccel_get_buffer;
    _codecContext->get_format = GetHwFormat;
    _codecContext->thread_safe_callbacks = 1;

    if (avcodec_open2(_codecContext, codec, nullptr) < 0)
    {
       std::cout << "Video codec open error" << std::endl;
       return false;
    }

    And here is the definition of GetHwFormat referenced above:

    AVPixelFormat GetHwFormat(AVCodecContext *s, const AVPixelFormat *pix_fmts)
    {
       InputStream* ist = (InputStream*)s->opaque;
       ist->active_hwaccel_id = HWACCEL_DXVA2;
       ist->hwaccel_pix_fmt = AV_PIX_FMT_DXVA2_VLD;
       return ist->hwaccel_pix_fmt;
    }

    When I open an mp4 video (encoded in H.264) at HD resolution or less, everything seems to work fine. However, as soon as I try higher-resolution videos like 3840x2160, I get the following errors repeatedly:

    Failed to execute: 0x80070057
    Hardware accelerator failed to decode picture

    I also start getting the following errors after a few seconds:

    co located POCs unavailable

    And the video is not displayed properly: I get a lot of artifacts all over the video and it is lagging. I checked the first error in the ffmpeg source code. It seems that IDirectXVideoDecoder_Execute fails because of an invalid parameter. Since this is happening within ffmpeg, there must be something that I'm missing, but I can't figure out what. The only relevant post I found about this error blamed multithreading, but I set thread_count to 1 before opening the codec.

    This issue is happening on my main computer, which has the following specs:

    • i7-4790 CPU @ 3.6GHz
    • RAM 16 GB
    • Intel HD Graphics 4600
    • Windows 8.1

    The same issue is not happening on my second computer, which has the following specs:

    • i7 4510U @ 2GHz
    • RAM 8 GB
    • NVIDIA GeForce GTX 750Ti
    • Windows 10

    If I use DXVAChecker on my main computer, it says that my graphics card supports DXVA2 for H264_VLD_*, and I can see that the calls to the Microsoft API are being made (DXVA2_DecodeDeviceCreated, DXVA2_DecodeDeviceBeginFrame, DXVA2_DecodeDeviceGetBuffer, DXVA2_DecodeDeviceExecute, DXVA2_DecodeDeviceEndFrame) while my video is playing.

    I also don't see any increase in GPU usage (on either computer) between the version with hardware decoding and the version without; however, I do see a decrease in CPU usage (though not as much as I was expecting). This is also very strange.

    Note that I tried both the Windows release available on the FFmpeg website and a version that I compiled with --enable-dxva2. I have searched a lot already but I was unable to find what I'm doing wrong.

    Hopefully, someone can help me, or maybe point me to a better example?
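
    One way to narrow this down - a sketch assuming a Windows ffmpeg build configured with --enable-dxva2, and with the file name as a placeholder - is to run the same 3840x2160 file through the standalone ffmpeg CLI with the dxva2 hwaccel, which exercises the same decode path without the custom player code:

    # decode only and discard the output; hwaccel failures show up in the console log
    ffmpeg -hwaccel dxva2 -i input_3840x2160.mp4 -f null -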

  • ffmpeg stream chrome kiosk mode ubuntu 16.04 server

    15 February 2021, by Raul

    I have a weird out-of-sync issue while using ffmpeg to stream a Chrome browser in kiosk mode to YouTube Live from an Ubuntu 16.04 server.

    Issue: the output video streamed to YouTube has audio and video out of sync, sometimes by as much as 3 seconds.

    Current flow:

    1) Start pulseaudio - we are using something like this to start it:

    pulseaudio --start -vvv --disallow-exit --log-target=syslog --high-priority --exit-idle-time=-1 --daemonize

    2) Start Xvfb:

    Xvfb :0 -ac -screen 0 1920x1080x24

    3) Start Chrome (Linux) in kiosk mode:

    google-chrome --kiosk --disable-gpu --incognito --no-first-run --disable-java --disable-plugins --disable-translate --disk-cache-size=$((1024 * 1024)) --disk-cache-dir=/tmp/chrome/ --user-data-dir=/tmp/chrome/ --force-device-scale-factor=1 --window-size=1920,1080 --window-position=0,0 LOCATION_URL

    4) Start ffmpeg:

    ffmpeg -y \
  -thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
  -thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
  -c:v libx264 -pix_fmt yuv420p -c:v libx264 -g 48 -crf 24 -filter:v fps=24 -preset ultrafast -tune zerolatency \
  -c:a aac -strict -2 -channel_layout stereo -ab 96k -ac 2 -flags +global_header \
  -f flv YOUTUBE_LIVE_STREAMING_RTMP

    Note: this is running on an Amazon EC2 instance, meaning there is no sound card, so ALSA and pulseaudio create a dummy audio card. However, the latency does not come from there. Logs:

    Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Adjust latency mode enabled, configuring sink latency to half of overall latency.
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Requested latency=23.22 ms, Received latency=23.22 ms
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Final latency 69.66 ms = 23.22 ms + 2*11.61 ms + 23.22 ms

    At this point, here's what we observed:

    1. If we start ffmpeg right after issuing the command to start Chrome, we see DTS errors from ffmpeg. Audio is out of sync with the video, 3-5 seconds AHEAD. We also noticed the offset remains the same for the full duration of the stream.

    2. If we start ffmpeg after around 10 seconds, audio and video are almost in sync. We then manually added -itsoffset -0.125 to the ffmpeg command (see the sketch after this list) and everything is perfect.
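
    A minimal sketch of where that offset sits, assuming it was meant to shift the pulse/ALSA audio input (-itsoffset is an input option and applies to the -i that follows it); the -0.125 value is just the one found empirically above, and the duplicated -c:v libx264 from step 4 is dropped:

    ffmpeg -y \
  -thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
  -itsoffset -0.125 -thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
  -c:v libx264 -pix_fmt yuv420p -g 48 -crf 24 -filter:v fps=24 -preset ultrafast -tune zerolatency \
  -c:a aac -strict -2 -channel_layout stereo -ab 96k -ac 2 -flags +global_header \
  -f flv YOUTUBE_LIVE_STREAMING_RTMP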

    Questions:

    1. Why would ffmpeg have so much lag if it's started right after Chrome?

    2. Is starting ffmpeg after 10 or X seconds the expected behavior? That is, is it that the system needs to wait for the audio/video signals to be "ready" or something?

    3. Is there a way to know for sure when Chrome is fully ready so we can start ffmpeg? We found it sometimes takes 5 seconds, sometimes 10, depending on the URL we load.

    4. Besides the DTS errors that ffmpeg throws, is there any other way to know if the audio/video is out of sync? Sometimes we have a delay of between 0.5 and 1 second, but ffmpeg does not report anything, and a restart is required to "re-balance" the audio/video inputs and get them back in sync.

    5. Can pulseaudio be the problem in this scenario?

    Thank you

    UPDATE Dec 20

    We were able to do some tricks to force Chrome to start the audio on page load, which forces it to connect to pulseaudio. Doing that, plus adding a 3-second delay before starting ffmpeg, there is no more delay when ffmpeg starts.
    However, our app is a WebRTC app, and we have an even stranger thing happening: if we start the page with no webcam/audio, then once the webcam/audio is enabled, ffmpeg (while showing no errors) has a delay of about 2 seconds. If we keep talking, that delay is GONE within at most 30 seconds or so.

    So the new questions are:

    1. Besides the DTS errors that ffmpeg throws, is there any other way to know if the audio/video is out of sync? Sometimes we have a delay of between 0.5 and 1 second, but ffmpeg does not report anything.

    2. What could cause the initial audio/video desync, and why does it then catch up?