
Other articles (53)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administer" section of the site.
    From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
    Each newly added language can still be deactivated as long as no object has been created in that language. Once one has, it becomes greyed out in the configuration and (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • The farm's recurring Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled to run every minute, simply calls the Cron of all the instances of the mutualisation on a regular basis. Combined with a system Cron on the central site of the mutualisation, this makes it possible to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)
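
    The behaviour described above (a central task visiting every mutualised site so that each instance's own Cron fires) can be sketched roughly as follows. The host names and URL scheme are invented for illustration; this is not MediaSPIP's actual code:

    ```python
    import urllib.request

    # Invented site list for illustration; the real farm would read its
    # instances from the mutualisation's configuration.
    SITES = ["site1.example.org", "site2.example.org"]

    def cron_urls(sites):
        # SPIP triggers its task scheduler on ordinary page hits, so
        # requesting any page of each instance is enough to wake its Cron.
        return [f"http://{host}/" for host in sites]

    def run_super_cron(sites, opener=urllib.request.urlopen):
        # Invoked every minute by a system Cron on the central site.
        for url in cron_urls(sites):
            try:
                opener(url, timeout=10)  # one visit per instance
            except OSError:
                pass  # an unreachable site must not block the others
    ```

    The try/except mirrors the point of the design: rarely visited or broken sites cannot stall the loop, so every instance still gets its regular visit.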

On other websites (6622)

  • How can I quantitatively measure gstreamer H264 latency between source and display?

    10 August 2018, by KevinM

    I have a project where we are using gstreamer, x264, etc., to multicast a video stream over a local network to multiple receivers (dedicated computers attached to monitors). We’re using gstreamer on both the video source (camera) systems and the display monitors.

    We’re using RTP, payload 96, and libx264 to encode the video stream (no audio).

    But now I need to quantify the latency between (as close as possible to) frame acquisition and display.

    Does anyone have suggestions that use the existing software?

    Ideally I’d like to be able to run the testing software for a few hours to generate enough statistics to quantify the system. Meaning that I can’t do one-off tests like pointing the source camera at the receiving display monitor showing a high-resolution clock and manually calculating the difference...

    I do realise that using a pure software-only solution, I will not be able to quantify the video acquisition delay (i.e. CCD to framebuffer).

    I can arrange that the system clocks on the source and display systems are synchronised to a high accuracy (using PTP), so I will be able to trust the system clocks (else I will use some software to track the difference between the system clocks and remove this from the test results).

    In case it helps, the project applications are written in C++, so I can use C event callbacks, if they’re available, to consider embedding system time in a custom header (e.g. frame xyz, encoded at time TTT), and use the same information on the receiver to calculate a difference.
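
    The timestamp-correlation idea from the question could be post-processed along these lines. All names here are hypothetical; the sketch assumes the sender and receiver each log (frame_id, epoch seconds) pairs on PTP-synchronised clocks:

    ```python
    from statistics import mean

    def latencies(sent, displayed):
        """sent/displayed map frame_id -> epoch seconds on each machine."""
        # With clocks synchronised via PTP, per-frame latency is simply
        # display time minus capture/encode time for the same frame id.
        return [displayed[f] - sent[f] for f in sent if f in displayed]

    def summarize(lat):
        # Running for hours reduces millions of samples to a few numbers.
        return {"mean_ms": 1000 * mean(lat), "max_ms": 1000 * max(lat)}
    ```

    For example, two frames captured at t = 0.0 s and t = 1.0 s and displayed at 0.050 s and 1.060 s give latencies of 50 ms and 60 ms, i.e. a mean of 55 ms.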

  • avformat/matroskaenc : Don't implicitly mark WebVTT in WebM as English

    18 January 2020, by Andreas Rheinhardt

    Writing the language of WebVTT in WebM proceeded differently from the
    language of all other tracks: when no language was given, nothing was
    written instead of "und" (for undefined). Because the default value of
    the Language element in WebM (which inherited it from Matroska) is
    "eng" (for English), any such track will actually be flagged as
    English.

    Doing it this way goes back to commit 509642b4 (the commit adding
    support for WebVTT), and no reason for it has been given in the commit
    message or in the discussion of the patch on the mailing list; the
    best explanation I can think of is this: the WebM wiki contains "The
    srclang attribute is stored as the Language sub-element." Someone
    unfamiliar with default values in Matroska/WebM could interpret this
    as meaning that no Language element should be written if the language
    is unknown. That is wrong, and this commit changes it.
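
    The default-value pitfall described above can be illustrated with a small sketch; this is not FFmpeg code, just the Matroska/WebM reader semantics restated in Python:

    ```python
    def effective_language(written_language):
        # Per Matroska/WebM, a reader substitutes the Language element's
        # default value, "eng", when the element is absent from the file.
        MATROSKA_DEFAULT = "eng"
        return MATROSKA_DEFAULT if written_language is None else written_language

    print(effective_language(None))   # 'eng': omitting the element flags the track as English
    print(effective_language("und"))  # 'und': explicitly writing "und" keeps it undefined
    ```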

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavformat/matroskaenc.c
  • FFMPEG Streaming, using list for multiple presentations

    3 January 2021, by JJ The Second

    I am currently using a third-party library to transcode videos from mp4 to HLS: https://video.aminyazdanpanah.com/python/start?r=hls#hls Great documentation, and it works fine; however, I have an issue when passing a list to hls.representations(), so I think I am doing something wrong. Here is how I run my code.

    presetList = []
    rep_1 = Representation(Size(1920, 1080), Bitrate(4096 * 1024, 320 * 1024))
    presetList.append(rep_1)
    rep_2 = Representation(Size(1440, 900), Bitrate(2048 * 1024, 320 * 1024))
    presetList.append(rep_2)

    video = "file.mp4"
    video = ffmpeg_streaming.input(video)
    completed_destination = "completed.m3u8"
    hls = video.hls(Formats.h264())
    hls.representations(presetList)
    hls.output(completed_destination)

    When I run this I get the following error, which is raised inside the library, meaning the values in my list are not being passed properly?

    File "/var/www/transcoder/transcoder/env/lib/python3.8/site-packages/ffmpeg_streaming/_hls_helper.py", line 87, in stream_info
        f'BANDWIDTH={rep.bitrate.calc_overall}',
    AttributeError: 'list' object has no attribute 'bitrate'

    If I instead run the same code with the only change being the following, it works like a charm:

    hls.representations(rep_1, rep_2)

    What am I doing wrong here? Thanks
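
    Judging by the traceback, hls.representations() appears to take each representation as a separate positional argument (*args), so passing a Python list hands it a single argument that is itself a list, and the library then fails when it reads .bitrate on it. If that is the case, unpacking the list with * (i.e. hls.representations(*presetList)) should behave exactly like passing rep_1, rep_2 directly. A stand-in sketch of the difference, not the real library:

    ```python
    def representations(*reps):
        # Stand-in for hls.representations: reads an attribute of every
        # item, just as the real code reads rep.bitrate.
        return [rep.upper() for rep in reps]

    preset_list = ["a", "b"]

    print(representations("a", "b"))      # ['A', 'B']: the question's working call
    print(representations(*preset_list))  # ['A', 'B']: list unpacked into separate args
    # representations(preset_list) would raise AttributeError: the single
    # argument is the list itself, which has no 'upper' (cf. no 'bitrate').
    ```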