
Other articles (86)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanation of the notable changes involved in upgrading from MediaSPIP version 0.1 to version 0.2. What's new?
    Software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes to your MediaSPIP site, or news about your projects, using the news section of your MediaSPIP.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of the news type, the fields offered by default are: Publication date (customise the publication date) (...)

On other sites (11995)

  • Segmentation fault with avcodec_encode_video2() while encoding H.264

    16 July 2015, by Baris Demiray

    I’m trying to convert a cv::Mat to an AVFrame to encode it then in H.264 and wanted to start from a simple example, as I’m a newbie in both. So I first read in a JPEG file, and then do the pixel format conversion with sws_scale() from AV_PIX_FMT_BGR24 to AV_PIX_FMT_YUV420P keeping the dimensions the same, and it all goes fine until I call avcodec_encode_video2().

    I read quite a few discussions regarding AVFrame allocation, and the question segmentation fault while avcodec_encode_video2 seemed like a match, but I just can't see what I'm missing or getting wrong.

    Here is minimal code with which you can reproduce the crash; it should be compiled with:

    g++ -o OpenCV2FFmpeg OpenCV2FFmpeg.cpp -lopencv_imgproc -lopencv_highgui -lopencv_core -lswscale -lavutil -lavcodec -lavformat

    Its output on my system:

    cv::Mat [width=420, height=315, depth=0, channels=3, step=1260]
    I'll soon crash..
    Segmentation fault

    And the sample.jpg file's details from the identify tool:

    ~temporary/sample.jpg JPEG 420x315 420x315+0+0 8-bit sRGB 38.3KB 0.000u 0:00.000

    Please note that I’m trying to create a video out of a single image, just to keep things simple.

    #include <iostream>
    #include <cassert>
    using namespace std;

    extern "C" {
        #include <libavcodec/avcodec.h>
        #include <libswscale/swscale.h>
        #include <libavformat/avformat.h>
    }

     #include <opencv2/core/core.hpp>
     #include <opencv2/highgui/highgui.hpp>

    const string TEST_IMAGE = "/home/baris/temporary/sample.jpg";

    int main(int /*argc*/, char** argv)
    {
       av_register_all();
       avcodec_register_all();

       /**
        * Initialise the encoder
        */
       AVCodec *h264encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
       AVFormatContext *cv2avFormatContext = avformat_alloc_context();

       /**
        * Create a stream and allocate frames
        */
       AVStream *h264outputstream = avformat_new_stream(cv2avFormatContext, h264encoder);
       avcodec_get_context_defaults3(h264outputstream->codec, h264encoder);
       AVFrame *sourceAvFrame = av_frame_alloc(), *destAvFrame = av_frame_alloc();
       int got_frame;

       /**
        * Pixel formats for the input and the output
        */
       AVPixelFormat sourcePixelFormat = AV_PIX_FMT_BGR24;
       AVPixelFormat destPixelFormat = AV_PIX_FMT_YUV420P;

       /**
        * Create cv::Mat
        */
       cv::Mat cvFrame = cv::imread(TEST_IMAGE, CV_LOAD_IMAGE_COLOR);
       int width = cvFrame.size().width, height = cvFrame.size().height;
        cerr << "cv::Mat [width=" << width << ", height=" << height << ", depth=" << cvFrame.depth() << ", channels=" << cvFrame.channels() << ", step=" << cvFrame.step << "]" << endl;

       h264outputstream->codec->pix_fmt = destPixelFormat;
       h264outputstream->codec->width = cvFrame.cols;
       h264outputstream->codec->height = cvFrame.rows;

       /**
        * Prepare the conversion context
        */
       SwsContext *bgr2yuvcontext = sws_getContext(width, height,
                                                   sourcePixelFormat,
                                                   h264outputstream->codec->width, h264outputstream->codec->height,
                                                   h264outputstream->codec->pix_fmt,
                                                   SWS_BICUBIC, NULL, NULL, NULL);

       /**
        * Convert and encode frames
        */
        for (uint i = 0; i < 250; i++)
       {
           /**
            * Allocate source frame, i.e. input to sws_scale()
            */
           avpicture_alloc((AVPicture*)sourceAvFrame, sourcePixelFormat, width, height);

            for (int h = 0; h < height; h++)
                memcpy(&(sourceAvFrame->data[0][h*sourceAvFrame->linesize[0]]), &(cvFrame.data[h*cvFrame.step]), width*3);

           /**
            * Allocate destination frame, i.e. output from sws_scale()
            */
           avpicture_alloc((AVPicture *)destAvFrame, destPixelFormat, width, height);

           sws_scale(bgr2yuvcontext, sourceAvFrame->data, sourceAvFrame->linesize,
                     0, height, destAvFrame->data, destAvFrame->linesize);

           /**
            * Prepare an AVPacket for encoded output
            */
           AVPacket avEncodedPacket;
            av_init_packet(&avEncodedPacket);
            avEncodedPacket.data = NULL;
            avEncodedPacket.size = 0;
            // av_free_packet(&avEncodedPacket); w/ or w/o result doesn't change

            cerr << "I'll soon crash.." << endl;
            if (avcodec_encode_video2(h264outputstream->codec, &avEncodedPacket, destAvFrame, &got_frame) < 0)
               exit(1);

            cerr << "Checking if we have a frame" << endl;
            if (got_frame)
                av_write_frame(cv2avFormatContext, &avEncodedPacket);

            av_free_packet(&avEncodedPacket);
            av_frame_free(&sourceAvFrame);
            av_frame_free(&destAvFrame);
       }
    }

    Thanks in advance!

    EDIT: And the stack trace after the crash:

    Thread 2 (Thread 0x7fffe5506700 (LWP 10005)):
    #0  0x00007ffff4bf6c5d in poll () at /lib64/libc.so.6
    #1  0x00007fffe9073268 in  () at /usr/lib64/libusb-1.0.so.0
    #2  0x00007ffff47010a4 in start_thread () at /lib64/libpthread.so.0
    #3  0x00007ffff4bff08d in clone () at /lib64/libc.so.6

    Thread 1 (Thread 0x7ffff7f869c0 (LWP 10001)):
    #0  0x00007ffff5ecc7dc in avcodec_encode_video2 () at /usr/lib64/libavcodec.so.56
    #1  0x00000000004019b6 in main(int, char**) (argv=0x7fffffffd3d8) at ../src/OpenCV2FFmpeg.cpp:99

    EDIT2: The problem was that I hadn't called avcodec_open2() on the codec, as spotted by Ronald. The final version of the code is at https://github.com/barisdemiray/opencv2ffmpeg/, with leaks and probably other problems; I hope to improve it while learning both libraries.
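
    For context, here is a minimal sketch of the missing step, reusing the variable names from the code above: the codec context has to be opened with avcodec_open2() after its width, height and pixel format are set, and before the first call to avcodec_encode_video2(). The time_base and gop_size values below are illustrative assumptions (x264 needs a valid time_base); they are not taken from the question or the linked repository.

    // Sketch: open the encoder context before entering the encode loop.
    // h264outputstream and h264encoder are the variables from the question's code;
    // the frame rate and GOP size are assumed values.
    h264outputstream->codec->time_base.num = 1;   // assumed 25 fps
    h264outputstream->codec->time_base.den = 25;
    h264outputstream->codec->gop_size = 12;       // assumed keyframe interval

    if (avcodec_open2(h264outputstream->codec, h264encoder, NULL) < 0)
    {
        cerr << "Could not open the H.264 encoder" << endl;
        return 1;
    }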

  • What is “interoperable TTML”?

    1 January 2014, by silvia

    I’ve just tried to come to terms with the latest state of TTML, the Timed Text Markup Language.

    TTML has been specified by the W3C Timed Text Working Group and released as a RECommendation v1.0 in November 2010. Since then, several organisations have tried to adopt it as their caption file format. This includes the SMPTE, the EBU (European Broadcasting Union), and Microsoft.

    Both Microsoft and the EBU looked at TTML in detail and decided that, to make it usable for their use cases, a restriction of its functionality was needed.

    EBU-TT

    The EBU released EBU-TT, which restricts the set of valid attributes and features. “The EBU-TT format is intended to constrain the features provided by TTML, especially to make EBU-TT more suitable for the use with broadcast video and web video applications.” (see EBU-TT).

    In addition, EBU-specific namespaces were introduced to extend TTML with EBU-specific data types, e.g. ebuttdt:frameRateMultiplierType or ebuttdt:smpteTimingType. Similarly, a number of metadata elements were introduced, e.g. ebuttm:documentMetadata, ebuttm:documentEbuttVersion, or ebuttm:documentIdentifier.

    The use of namespaces as an extensibility mechanism ensures that EBU-TT files remain valid TTML files. However, a vanilla TTML parser will not know what to do with these custom extensions and will drop them on the floor.

    Simple Delivery Profile

    With the intention to make TTML ready for “internet delivery of Captions originated in the United States”, Microsoft proposed a “Simple Delivery Profile for Closed Captions (US)” (see Simple Profile). The Simple Profile is also a restriction of TTML.

    Unfortunately, the Microsoft profile is not the same as the EBU-TT profile: for example, it contains the “set” element, which is not conformant in EBU-TT. Similarly, the supported style features are different, e.g. Simple Profile supports “display-region”, while EBU-TT does not. On the other hand, EBU-TT supports monospace, sans-serif and serif fonts, while the Simple profile does not.

    Thus, files created for the Simple Delivery Profile will not work on players that expect EBU-TT, and vice versa.

    Fortunately, the Simple Delivery Profile does not introduce any new namespaces or new features, so at least it is a proper subset of TTML and not both a restriction and an extension like EBU-TT.

    SMPTE-TT

    SMPTE also created a version of the TTML standard called SMPTE-TT. SMPTE did not decide on a subset of TTML for their purposes – it was simply adopted as a complete set. “This Standard provides a framework for timed text to be supported for content delivered via broadband means,…” (see SMPTE-TT).

    However, SMPTE extended TTML in SMPTE-TT with an ability to store a binary blob with captions in another format. This allows using SMPTE-TT as a transport format for any caption format and is deemed to help with “backwards compatibility”.

    Now, instead of specifying a profile, SMPTE decided to define how to convert CEA-608 captions to SMPTE-TT. Even if it’s not called a “profile”, that’s actually what it is. It even has its own namespace: “m608:”.

    Conclusion

    With all these different versions of TTML, I ask myself what a video player that claims support for TTML will do to get something working. The only chance it has is to implement all the extensions defined in all the different profiles. I pity the player that has to deal with a SMPTE-TT file that has a binary blob in it and is expected to be able to decode this.

    Now, what is a caption author supposed to do when creating TTML? They obviously cannot expect all players to be able to play back all TTML versions. Should they create different files depending on what platform they are targeting, i.e. an EBU-TT version, a SMPTE-TT version, a vanilla TTML version, and a Simple Delivery Profile version? Or should they throw all the features of all the versions into one TTML file and hope that players will pick out the things they require and drop the rest on the floor?

    Maybe the best way to make progress would be to compile a list of the “safe” features: those features that every TTML profile supports. That may be the best way to get an “interoperable TTML” file. Here’s me hoping that this minimal set of features doesn’t just end up being the usual (starttime, endtime, text) triple.

    UPDATE:

    I just found out that UltraViolet have their own profile of SMPTE-TT called CFF-TT (see UltraViolet FAQ and spec). They are making some SMPTE-TT fields optional, but introduce a new @forcedDisplayMode attribute under their own namespace “cff:”.
