
Other articles (47)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used as a fallback.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

On other sites (9612)

  • Why don't VideoIntelligence end_time_offset values match for the same video?

    3 September 2022, by ProGirlXOXO

    When parsing the results of Google's Video Intelligence API, I've noticed that speech_transcriptions and all other annotation results are split into two separate list items within annotation_results. See the example output below.

    Digging further, I've noticed they have slightly varied end_time_offset values.

    1. Why are these end_time_offset values different? I expect them both to show the exact same value, since the exact same video is being analyzed for both sets of features. In some cases this value is off by more than a second.

    2. Assuming this is not an error, which end_time_offset should I trust if I want to determine the total duration of the video?

    3. Why is the feature output split into two different list items?


    {
        "annotation_results": [
            {
                "input_uri": "<redacted>.mp4",
                "segment": {
                    "start_time_offset": {},
                    "end_time_offset": {
                        "seconds": 57,
                        "nanos": 849516000
                    }
                },
                "shot_label_annotations": [],
                "shot_annotations": [],
                "explicit_annotation": {},
                "text_annotations": [],
                "logo_recognition_annotations": []
            },
            {
                "input_uri": "<redacted>.mp4",
                "segment": {
                    "start_time_offset": {},
                    "end_time_offset": {
                        "seconds": 58,
                        "nanos": 69333000
                    }
                },
                "speech_transcriptions": []
            }
        ]
    }

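    A minimal parsing sketch in Python, assuming the response has already been serialized to a plain dict (e.g. with json.load): each entry in annotation_results carries its own segment, so one pragmatic convention is to take the largest end_time_offset as the overall duration. The file name is purely illustrative, and treating the maximum as the true duration is an assumption, not something the API documentation guarantees.

    import json

    def offset_to_seconds(offset: dict) -> float:
        # Convert a {"seconds": ..., "nanos": ...} offset to float seconds.
        return offset.get("seconds", 0) + offset.get("nanos", 0) / 1e9

    def video_duration(response: dict) -> float:
        # Largest segment end across all annotation_results entries.
        return max(
            offset_to_seconds(result["segment"]["end_time_offset"])
            for result in response["annotation_results"]
            if "segment" in result
        )

    with open("annotation_output.json") as fh:   # hypothetical dump of the output above
        print(video_duration(json.load(fh)))     # 58.069333 for the sample output
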
  • fftools/cmdutils: extend stream specifiers to match by disposition

    15 September 2024, by Anton Khirnov
    fftools/cmdutils: extend stream specifiers to match by disposition
    
    • [DH] Changelog
    • [DH] doc/fftools-common-opts.texi
    • [DH] fftools/cmdutils.c
    • [DH] fftools/cmdutils.h
    • [DH] tests/fate/ffmpeg.mak
    • [DH] tests/ref/fate/ffmpeg-spec-disposition
  • How to process a video stream?

    27 April 2016, by sharpener

    I would like to ask experienced multimedia professionals how to proceed with the following task:

    A given URL provides a video stream, and we would like to get access to the decoded frames (as a byte stream in memory) in a managed Win7+ application (C#). We don't want to render/present the frames the standard way. The video format is known but not fixed (it might change between two successive sessions, but we will know the parameters).

    So far, I have found there are several methods, and I have built the following picture in my mind (a decoding sketch follows the list below):

    1. ffmpeg wrapper
      • Pros
        1. Self-contained (no dependency on Windows technologies)
        2. Powerful
      • Cons
        1. A little more complex to understand
        2. Lots of different wrapper variants (FFmpeg.NET, ffmpeg-sharp, FFmpeg.AutoGen, ...)
    2. DirectShow wrapper
      • Pros
        1. Widely used/supported technology (various filters freely available)
        2. Nice/detailed documentation on MSDN
      • Cons
        1. Quite old
        2. Considered obsolete from the author’s point of view (on runtimes >= Win8 it is available only for the desktop model)
    3. MediaFoundation wrapper
      • Pros
        1. Theoretical successor to DirectShow, so it should remain available in the future
      • Cons
        1. Seems not to be as good as DirectShow
        2. Not very popular, limited "community" support
    4. FFmpegInterop wrapper
      • Pros
        1. Microsoft’s open source wrapper alternative
      • Cons
        1. Not available for runtime < Win8
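
    A minimal sketch of the ffmpeg route (option 1), written in Python only for brevity: the same command line can be launched from a managed C# application with System.Diagnostics.Process and its StandardOutput read in exactly the same way. The URL, resolution and pixel format below are illustrative assumptions; in practice they would come from the known stream parameters mentioned above.

    import subprocess

    URL = "http://example.com/stream"    # hypothetical stream URL
    WIDTH, HEIGHT = 1280, 720            # assumed known from the session parameters
    FRAME_SIZE = WIDTH * HEIGHT * 3      # bytes per decoded RGB24 frame

    # Let ffmpeg do the demuxing/decoding and write raw RGB24 frames to stdout.
    proc = subprocess.Popen(
        ["ffmpeg", "-i", URL, "-f", "rawvideo", "-pix_fmt", "rgb24", "pipe:1"],
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,
    )

    while True:
        frame = proc.stdout.read(FRAME_SIZE)   # one decoded frame, fully in memory
        if len(frame) < FRAME_SIZE:            # stream ended or ffmpeg exited
            break
        # ... hand `frame` (raw bytes) to the processing pipeline here ...

    proc.stdout.close()
    proc.wait()

    Each read returns exactly one frame's worth of bytes, which is the in-memory byte stream the question asks for; no window or renderer is ever created.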