Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (53)

  • MediaSPIP Player: potential problems

    22 February 2011, by

    The player does not work on Internet Explorer
    On Internet Explorer (8 and 7 at least), the plugin uses the Flash player Flowplayer to play video and sound. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
    If the configuration of that Apache module contains a line resembling the following, try removing or commenting it out to see whether the player then works correctly: (...)

  • MediaSPIP Player: the controls

    26 May 2010, by

    Mouse controls for the player
    In addition to the click actions on the visible buttons of the player interface, other actions can be performed with the mouse: Click: clicking on the video, or on the sound logo, starts or pauses playback depending on its current state; Scroll wheel: when the mouse is placed over the area occupied by the media (hover), the wheel no longer has its usual page-scrolling effect, but instead decreases or (...)

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
    This lets channel administrators configure these menus precisely.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: The main menu; Identifier: barrenav; This menu is generally inserted at the top of the page, after the header block, and its identifier makes it compatible with skeletons based on Zpip; (...)

On other sites (5332)

  • What is “interoperable TTML”?

    1 January 2014, by silvia

    I’ve just tried to come to terms with the latest state of TTML, the Timed Text Markup Language.

    TTML has been specified by the W3C Timed Text Working Group and released as a Recommendation (v1.0) in November 2010. Since then, several organisations have tried to adopt it as their caption file format, including SMPTE, the EBU (European Broadcasting Union), and Microsoft.

    Both Microsoft and the EBU actually looked at TTML in detail and decided that, in order to make it usable for their use cases, its functionality needed to be restricted.

    EBU-TT

    The EBU released EBU-TT, which restricts the set of valid attributes and features. “The EBU-TT format is intended to constrain the features provided by TTML, especially to make EBU-TT more suitable for the use with broadcast video and web video applications.” (see EBU-TT).

    In addition, EBU-specific namespaces were introduced to extend TTML with EBU-specific data types, e.g. ebuttdt:frameRateMultiplierType or ebuttdt:smpteTimingType. Similarly, a number of metadata elements were introduced, e.g. ebuttm:documentMetadata, ebuttm:documentEbuttVersion, or ebuttm:documentIdentifier.

    Using namespaces as the extensibility mechanism ensures that EBU-TT files remain valid TTML files. However, a vanilla TTML parser will not know what to do with these custom extensions and will drop them on the floor.

    Simple Delivery Profile

    With the intention of making TTML ready for “internet delivery of Captions originated in the United States”, Microsoft proposed a “Simple Delivery Profile for Closed Captions (US)” (see Simple Profile). The Simple Profile is also a restriction of TTML.

    Unfortunately, the Microsoft profile is not the same as the EBU-TT profile: for example, it contains the “set” element, which is not conformant in EBU-TT. Similarly, the supported style features differ, e.g. the Simple Profile supports “display-region” while EBU-TT does not. On the other hand, EBU-TT supports monospace, sans-serif and serif fonts, while the Simple Profile does not.

    Thus, files created for the Simple Delivery Profile will not work on players that expect EBU-TT, and vice versa.

    Fortunately, the Simple Delivery Profile does not introduce any new namespaces or features, so at least it is an explicit subset of TTML, and not both a restriction and an extension like EBU-TT.

    SMPTE-TT

    SMPTE also created a version of the TTML standard called SMPTE-TT. SMPTE did not decide on a subset of TTML for their purposes – it was simply adopted as a complete set. “This Standard provides a framework for timed text to be supported for content delivered via broadband means,…” (see SMPTE-TT).

    However, SMPTE extended TTML in SMPTE-TT with an ability to store a binary blob with captions in another format. This allows using SMPTE-TT as a transport format for any caption format and is deemed to help with “backwards compatibility”.

    Now, instead of specifying a profile, SMPTE decided to define how to convert CEA-608 captions to SMPTE-TT. Even if it’s not called a “profile”, that’s actually what it is. It even has its own namespace: “m608:”.

    Conclusion

    With all these different versions of TTML, I ask myself what a video player that claims support for TTML will do to get something working. The only chance it has is to implement all the extensions defined in all the different profiles. I pity the player that has to deal with a SMPTE-TT file that has a binary blob in it and is expected to be able to decode it.

    Now, what is a caption author supposed to do when creating TTML? They obviously cannot expect all players to be able to play back all TTML versions. Should they create different files depending on the target platform, i.e. an EBU-TT version, a SMPTE-TT version, a vanilla TTML version, and a Simple Delivery Profile version? Or should they throw all the features of all the versions into one TTML file and hope that players will pick out what they need and drop the rest on the floor?

    Maybe the best way to progress would be to make a list of the “safe” features : those features that every TTML profile supports. That may be the best way to get an “interoperable TTML” file. Here’s me hoping that this minimal set of features doesn’t just end up being the usual (starttime, endtime, text) triple.
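
    For illustration, here is a sketch of what a caption file limited to that minimal triple could look like, using only core TTML 1.0 vocabulary (the timing values and text are invented):

    ```xml
    <tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
      <body>
        <div>
          <!-- each cue is just (starttime, endtime, text) -->
          <p begin="00:00:01.000" end="00:00:03.500">A first caption.</p>
          <p begin="00:00:04.000" end="00:00:06.500">A second caption.</p>
        </div>
      </body>
    </tt>
    ```

    A document of this shape uses no styling, regions, or extension namespaces, so every profile discussed above should presumably accept it.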

    UPDATE:

    I just found out that UltraViolet have their own profile of SMPTE-TT called CFF-TT (see UltraViolet FAQ and spec). They are making some SMPTE-TT fields optional, but introduce a new @forcedDisplayMode attribute under their own namespace “cff:”.

  • While using skvideo.io.FFmpegReader and skvideo.io.FFmpegWriter for video throughput, the input and output video lengths differ

    28 June 2024, by Kaesebrotus Anonymous

    I have an H.264-encoded MP4 video about 27.5 minutes long, and I am trying to create a copy of it that excludes the first 5 frames. I am using scikit-video and FFmpeg in Python for this purpose. I do not have a GPU, so I am using the libx264 codec for the output video.

    


    It generally works, and the output video excludes the first 5 frames. However, the output video ends up only about 22 minutes long. When checking the videos visually, the shorter one does seem slightly faster, and I can find the same frames at different timestamps. In Windows Explorer (Properties, then Details), both videos' frame rates show as 20.00 fps.

    


    So, my goal is to have both videos be the same length (except for the loss of the first 5 frames, which at 20 fps should amount to a duration difference of 0.25 seconds), using the same or almost the same codec, without losing quality.

    


    Can anyone explain why this apparent loss of frames is happening?

    


    Thank you for your interest in helping me, please find the details below.

    


    Here is a minimal example of what I have done.

    


import skvideo.io

framerate = str(20)
reader = skvideo.io.FFmpegReader('inputvideo.mp4', inputdict={'-r': framerate})
writer = skvideo.io.FFmpegWriter('outputvideo.mp4',
                                 outputdict={'-vcodec': 'libx264', '-r': framerate})

# Copy every frame except the first 5
for idx, frame in enumerate(reader.nextFrame()):
    if idx < 5:
        continue
    writer.writeFrame(frame)

reader.close()
writer.close()


    


    When I read the output video again using FFmpegReader and check the .probeInfo, I can see that the output video has fewer frames in total. I have also managed to reproduce the same problem with shorter videos (not excluding the first 5 frames, just passing the video through), e.g. a 10-second input turns into an 8-second output with fewer frames. I have also tried playing around with further parameters of the outputdict, e.g. -pix_fmt and -b. I tried setting -time_base in the outputdict to the same value as in the inputdict, but that did not seem to have the desired effect; I am not sure the parameter name is even right.

    


    For additional info, I am providing the .probeInfo of the input video, of which I used 10 seconds, and the .probeInfo of the 8-second output video it produced.

    


    **input video** .probeInfo:
{'video': OrderedDict([('@index', '0'),
              ('@codec_name', 'h264'),
              ('@codec_long_name',
               'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10'),
              ('@profile', 'High 4:4:4 Predictive'),
              ('@codec_type', 'video'),
              ('@codec_tag_string', 'avc1'),
              ('@codec_tag', '0x31637661'),
              ('@width', '4096'),
              ('@height', '3000'),
              ('@coded_width', '4096'),
              ('@coded_height', '3000'),
              ('@closed_captions', '0'),
              ('@film_grain', '0'),
              ('@has_b_frames', '0'),
              ('@sample_aspect_ratio', '1:1'),
              ('@display_aspect_ratio', '512:375'),
              ('@pix_fmt', 'yuv420p'),
              ('@level', '60'),
              ('@chroma_location', 'left'),
              ('@field_order', 'progressive'),
              ('@refs', '1'),
              ('@is_avc', 'true'),
              ('@nal_length_size', '4'),
              ('@id', '0x1'),
              ('@r_frame_rate', '20/1'),
              ('@avg_frame_rate', '20/1'),
              ('@time_base', '1/1200000'),
              ('@start_pts', '0'),
              ('@start_time', '0.000000'),
              ('@duration_ts', '1984740000'),
              ('@duration', '1653.950000'),
              ('@bit_rate', '3788971'),
              ('@bits_per_raw_sample', '8'),
              ('@nb_frames', '33079'),
              ('@extradata_size', '43'),
              ('disposition',
               OrderedDict([('@default', '1'),
                            ('@dub', '0'),
                            ('@original', '0'),
                            ('@comment', '0'),
                            ('@lyrics', '0'),
                            ('@karaoke', '0'),
                            ('@forced', '0'),
                            ('@hearing_impaired', '0'),
                            ('@visual_impaired', '0'),
                            ('@clean_effects', '0'),
                            ('@attached_pic', '0'),
                            ('@timed_thumbnails', '0'),
                            ('@non_diegetic', '0'),
                            ('@captions', '0'),
                            ('@descriptions', '0'),
                            ('@metadata', '0'),
                            ('@dependent', '0'),
                            ('@still_image', '0')])),
              ('tags',
               OrderedDict([('tag',
                             [OrderedDict([('@key', 'language'),
                                           ('@value', 'und')]),
                              OrderedDict([('@key', 'handler_name'),
                                           ('@value', 'VideoHandler')]),
                              OrderedDict([('@key', 'vendor_id'),
                                           ('@value', '[0][0][0][0]')])])]))])}

**output video** .probeInfo:
{'video': OrderedDict([('@index', '0'),
              ('@codec_name', 'h264'),
              ('@codec_long_name',
               'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10'),
              ('@profile', 'High'),
              ('@codec_type', 'video'),
              ('@codec_tag_string', 'avc1'),
              ('@codec_tag', '0x31637661'),
              ('@width', '4096'),
              ('@height', '3000'),
              ('@coded_width', '4096'),
              ('@coded_height', '3000'),
              ('@closed_captions', '0'),
              ('@film_grain', '0'),
              ('@has_b_frames', '2'),
              ('@pix_fmt', 'yuv420p'),
              ('@level', '60'),
              ('@chroma_location', 'left'),
              ('@field_order', 'progressive'),
              ('@refs', '1'),
              ('@is_avc', 'true'),
              ('@nal_length_size', '4'),
              ('@id', '0x1'),
              ('@r_frame_rate', '20/1'),
              ('@avg_frame_rate', '20/1'),
              ('@time_base', '1/10240'),
              ('@start_pts', '0'),
              ('@start_time', '0.000000'),
              ('@duration_ts', '82944'),
              ('@duration', '8.100000'),
              ('@bit_rate', '3444755'),
              ('@bits_per_raw_sample', '8'),
              ('@nb_frames', '162'),
              ('@extradata_size', '47'),
              ('disposition',
               OrderedDict([('@default', '1'),
                            ('@dub', '0'),
                            ('@original', '0'),
                            ('@comment', '0'),
                            ('@lyrics', '0'),
                            ('@karaoke', '0'),
                            ('@forced', '0'),
                            ('@hearing_impaired', '0'),
                            ('@visual_impaired', '0'),
                            ('@clean_effects', '0'),
                            ('@attached_pic', '0'),
                            ('@timed_thumbnails', '0'),
                            ('@non_diegetic', '0'),
                            ('@captions', '0'),
                            ('@descriptions', '0'),
                            ('@metadata', '0'),
                            ('@dependent', '0'),
                            ('@still_image', '0')])),
              ('tags',
               OrderedDict([('tag',
                             [OrderedDict([('@key', 'language'),
                                           ('@value', 'und')]),
                              OrderedDict([('@key', 'handler_name'),
                                           ('@value', 'VideoHandler')]),
                              OrderedDict([('@key', 'vendor_id'),
                                           ('@value', '[0][0][0][0]')]),
                              OrderedDict([('@key', 'encoder'),
                                           ('@value',
                                            'Lavc61.8.100 libx264')])])]))])}


    


    I used 10 seconds by adding this to the bottom of the loop shown above:

    


    if idx >= 200:
        break
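
    Working from the two .probeInfo dumps above, a quick sanity check is to see whether each file's reported duration matches nb_frames divided by the frame rate. A sketch (the frame counts and durations are copied from the dumps; the diagnosis in the final comment is an assumption, not a confirmed cause):

    ```python
    # duration should equal nb_frames / fps if the timestamps are consistent
    fps = 20.0

    in_frames, in_duration = 33079, 1653.95   # input .probeInfo
    out_frames, out_duration = 162, 8.1       # output .probeInfo

    assert abs(in_frames / fps - in_duration) < 1e-6
    assert abs(out_frames / fps - out_duration) < 1e-6

    # Both files are internally consistent at 20 fps, so the writer is not
    # mangling timestamps; the reader simply yields ~162 frames where ~200
    # were expected. That points at the forced inputdict {'-r': framerate}
    # re-timing the source on the way in, rather than at the encoder.
    print(out_frames / fps)  # 8.1
    ```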


    


  • Detect frames that have a given image/logo with FFmpeg

    15 July 2012, by sofia

    I'm trying to split a video by detecting the presence of a marker (an image) in the frames. I've gone over the documentation and I see removelogo but not detectlogo.

    Does anyone know how this could be achieved? I know what the logo is and the region it will be in.

    I'm thinking I can extract all frames to png's and then analyse them one by one (or n by n) but it might be a lengthy process...

    Any pointers?
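
    Since the logo and its region are known, one low-tech approach (a sketch under assumptions: frames have already been decoded into NumPy arrays, the logo's position is fixed, and the threshold value is invented) is to compare the known region of each frame against the logo image directly, without going through intermediate PNG files:

    ```python
    import numpy as np

    def logo_present(frame, logo, x, y, threshold=10.0):
        """Return True if `logo` appears at pixel position (x, y) in `frame`.

        Compares the mean absolute pixel difference between the logo image
        and the corresponding region of the frame; small difference = match.
        """
        h, w = logo.shape[:2]
        region = frame[y:y + h, x:x + w].astype(np.float64)
        diff = np.abs(region - logo.astype(np.float64)).mean()
        return diff < threshold

    # Toy check: a frame that literally contains the logo vs. one that doesn't.
    logo = np.full((8, 8, 3), 200, dtype=np.uint8)
    frame_with = np.zeros((64, 64, 3), dtype=np.uint8)
    frame_with[10:18, 20:28] = 200
    frame_without = np.zeros((64, 64, 3), dtype=np.uint8)

    print(logo_present(frame_with, logo, 20, 10))     # True
    print(logo_present(frame_without, logo, 20, 10))  # False
    ```

    Splitting the video would then be a matter of recording the frame indices where logo_present flips from False to True. A plain mean-difference test is fragile under compression noise or overlays; normalised cross-correlation over the region would be a more robust variant of the same idea.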