Advanced search

Media (2)

Keyword: - Tags - / documentation

Other articles (104)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it has been activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

On other sites (11588)

  • avcodec/refstruct: Add simple API for refcounted objects

    4 August 2022, by Andreas Rheinhardt

    For now, this API is supposed to replace all the internal uses
    of reference-counted objects in libavcodec; "internal" here
    means that the object is created in libavcodec and is never
    put directly in the hands of anyone outside of it.

    It is intended to be made public eventually, but for now
    I enjoy the ability to modify it freely.

    Several shortcomings of the AVBuffer API motivated this API:
    a) The unnecessary allocations (and ensuing error checks)
    when using the API. Besides the need for runtime checks, it
    imposes upon the developer the burden of thinking through
    what happens in case of an error. Furthermore, these
    error paths are typically not covered by FATE.
    b) The AVBuffer API is designed with buffers and not with
    objects in mind: the type for the actual buffers used
    is uint8_t*; it pretends to be able to make buffers
    writable, but this is wrong in case the buffer is not a POD.
    Another instance of this thinking is the lack of a reset
    callback in the AVBufferPool API.
    c) The AVBuffer API incurs unnecessary indirections by
    going through the AVBufferRef.data pointer. If the user
    tries to avoid this indirection and stores a pointer to
    AVBuffer.data separately (which also allows using the correct
    type), the user has to keep these two pointers in sync
    in case they can change (and in any case has two pointers
    occupying space in the containing context). See the following
    commit using this API for H.264 parameter sets for an example
    of the removal of such syncing code, as well as of the casts
    involved in the parts where only the AVBufferRef* pointer
    was stored.
    d) Given that the AVBuffer API allows custom allocators,
    creating refcounted objects with dedicated free functions
    often involves a lot of boilerplate like this:

    obj = av_mallocz(sizeof(*obj));
    ref = av_buffer_create((uint8_t*)obj, sizeof(*obj), free_func, opaque, 0);
    if (!ref) {
        av_free(obj);
        return AVERROR(ENOMEM);
    }

    (There is also a corresponding av_free() at the end of free_func().)
    This is now just

    obj = ff_refstruct_alloc_ext(sizeof(*obj), 0, opaque, free_func);
    if (!obj)
        return AVERROR(ENOMEM);

    See the subsequent patch for the framepool (i.e. get_buffer.c)
    for an example.

    This API does things differently; it is designed to be lightweight*
    as well as geared to the common case where the allocator of the
    underlying object does not matter as long as it is big enough and
    suitably aligned. This allows allocating the user data together
    with the API's bookkeeping data, which avoids an allocation as well
    as the need for separate pointers to the user data and the API's
    bookkeeping data. This entails that the actual allocation of the
    object is performed by RefStruct, not the user; that is what
    avoids the boilerplate code mentioned in d).
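The single-allocation layout described above (bookkeeping data placed in front of the user's object, found again by a fixed offset) can be sketched in a self-contained way. This is a hypothetical illustration of the technique, not FFmpeg's actual RefStruct code: the names RefHeader, refstruct_alloc, refstruct_ref and refstruct_unref are invented for this sketch.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdlib>
#include <new>

// Bookkeeping header stored immediately before the user's object in the
// same allocation, so one malloc covers both (no separate AVBufferRef).
struct RefHeader {
    std::atomic<int> refcount;
    void (*free_cb)(void *opaque, void *obj); // optional destructor callback
    void *opaque;
};

// Header offset rounded up so the user data stays maximally aligned.
static const std::size_t kHeaderSize =
    (sizeof(RefHeader) + alignof(std::max_align_t) - 1) /
    alignof(std::max_align_t) * alignof(std::max_align_t);

// One allocation holds both the header and the user object;
// the caller only ever sees a pointer to the user data.
void *refstruct_alloc(std::size_t size, void *opaque,
                      void (*free_cb)(void *, void *))
{
    char *buf = static_cast<char *>(std::calloc(1, kHeaderSize + size));
    if (!buf)
        return nullptr;
    RefHeader *h = new (buf) RefHeader;
    h->refcount.store(1, std::memory_order_relaxed);
    h->free_cb = free_cb;
    h->opaque = opaque;
    return buf + kHeaderSize;
}

static RefHeader *header_of(void *obj)
{
    return reinterpret_cast<RefHeader *>(static_cast<char *>(obj) - kHeaderSize);
}

void *refstruct_ref(void *obj)
{
    header_of(obj)->refcount.fetch_add(1, std::memory_order_relaxed);
    return obj;
}

void refstruct_unref(void *obj)
{
    RefHeader *h = header_of(obj);
    if (h->refcount.fetch_sub(1, std::memory_order_acq_rel) == 1) {
        if (h->free_cb)
            h->free_cb(h->opaque, obj);
        h->~RefHeader();
        std::free(h); // the header sits at the start of the allocation
    }
}
```

A caller treats the returned pointer like an ordinary object pointer, with refstruct_ref/refstruct_unref playing the roles of av_buffer_ref/av_buffer_unref; no second pointer or cast through uint8_t* is needed.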

    As a downside, custom allocators are not supported, but it will
    become apparent in subsequent commits that there are enough
    use cases to make it worthwhile.

    Another advantage of this API is that one only needs to include
    the relevant header if one actually uses the API, and not merely
    because one includes some other header or component that uses it.
    This is because there is no RefStruct-type analog of AVBufferRef.
    This brings with it one further downside: it is not apparent from
    a pointer itself whether the underlying object is managed by the
    RefStruct API, or whether this pointer is a reference to it (or
    merely a pointer to it).

    Finally, this API supports const-qualified opaque pointees;
    this will allow the CBS code to avoid casting const away.

    *: Basically the only exception to the you-only-pay-for-what-you-use
    rule is that it always uses atomics for the refcount.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavcodec/Makefile
    • [DH] libavcodec/refstruct.c
    • [DH] libavcodec/refstruct.h
  • How to encode a video from several images generated in a C++ program without writing the separate frame images to disk?

    29 January 2016, by ksb496

    I am writing C++ code in which a sequence of N different frames is generated after performing some operations. After each frame is completed, I write it to disk as IMG_%d.png, and finally I encode the frames into a video with ffmpeg, using the x264 codec.

    The summarized pseudocode of the main part of the program is the following:

    std::vector<int> B(width*height*3);
    for (i=0; i<N; i++)
    {
     generateframe(B, i); // void generateframe(std::vector<int> &, int): returns different images for different i values.
     sprintf(s, "IMG_%d.png", i+1);
     WriteToDisk(B, s); // void WriteToDisk(std::vector<int>, char[])
    }

    The problem with this implementation is that the number of desired frames, N, is usually high (N ~ 100000), as is the resolution of the pictures (1920x1080), which overloads the disk, producing write cycles of dozens of GB after each execution.

    In order to avoid this, I have been trying to find documentation on passing each image stored in the vector B directly to an encoder such as x264, without having to write the intermediate image files to disk. Although some interesting topics were found, none of them solved exactly what I want: many of them concern running the encoder on existing image files on disk, while others provide solutions for other programming languages such as Python (here you can find a fully satisfactory solution for that platform).

    The pseudocode of what I would like to obtain is something similar to this:

    std::vector<int> B(width*height*3);
    video_file=open_video("Generated_Video.mp4", ...[encoder options]...);
    for (i=0; i<N; i++)
    {
     generateframe(B, i);
     add_frame(video_file, B); // Appends the frame B to the video file.
    }
    close_video(video_file);

    According to what I have read on related topics, the x264 C++ API might be able to do this, but, as stated above, I did not find a satisfactory answer for my specific question. I tried learning and using the ffmpeg source code directly, but its steep learning curve and compilation issues forced me to discard this possibility, as I am merely a non-professional programmer (I take it just as a hobby, and unfortunately I cannot spend that much time learning something so demanding).

    Another possible solution that came to my mind is to call the ffmpeg binary from the C++ code and somehow transfer the image data of each iteration (stored in B) to the encoder, without "closing" the video file, so that frames can keep being added until the N-th one, at which point the video file is "closed". In other words: call ffmpeg.exe from the C++ program to write the first frame to a video, but make the encoder "wait" for more frames; then call ffmpeg again to add the second frame and make the encoder "wait" again, and so on until the last frame, where the video is finished. However, I do not know how to proceed or whether it is actually possible.

    Edit 1:

    As suggested in the replies, I have been reading about named pipes and tried to use them in my code. First of all, it should be noted that I am working with Cygwin, so my named pipes are created as they would be under Linux. The modified pseudocode I used (including the corresponding system libraries) is the following:

    FILE *fd;
    mkfifo("myfifo", 0666);

    for (i=0; i<N; i++)
    {
     fd=fopen("myfifo", "wb");
     generateframe(B, i);
     WriteToPipe(B, fd); // void WriteToPipe(std::vector<int>, FILE *&fd)
     fflush(fd);
     fclose(fd);
    }
    unlink("myfifo");

    WriteToPipe is a slight modification of the previous WriteToDisk function, in which I made sure that the write buffer used to send the image data is small enough to fit within the pipe's buffering limits.

    Then I compile the program and run the following command in the Cygwin terminal:

    ./myprogram | ffmpeg -i pipe:myfifo -c:v libx264 -preset slow -crf 20 Video.mp4

    However, the program gets stuck at the first loop iteration (i=0), at the "fopen" line (that is, the first fopen call). If I had not launched ffmpeg, this would be natural, since the server (my program) would be waiting for a client program to connect to the "other side" of the pipe, but that is not the case. It looks like they cannot be connected through the pipe somehow, but I have not been able to find further documentation on how to overcome this issue. Any suggestions?
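One workaround in the spirit of the approach above is to skip the named pipe entirely and let the program spawn the encoder itself, streaming raw frames to its stdin via popen(). The sketch below is an illustration under stated assumptions, not a verified solution: encode_frames is an invented helper, the per-frame pattern stands in for the real generateframe(), and the ffmpeg command shown in the comment is illustrative and has not been validated against any particular setup.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Streams raw RGB24 frames into an external command's stdin via popen(),
// so no intermediate PNG files and no named pipe are needed. The command
// is a parameter, so the function can also be exercised with a harmless
// consumer such as "cat" or "wc -c".
bool encode_frames(const std::string &cmd, int width, int height, int nframes)
{
    FILE *pipe = popen(cmd.c_str(), "w");
    if (!pipe)
        return false;

    std::vector<uint8_t> frame(static_cast<std::size_t>(width) * height * 3);
    for (int i = 0; i < nframes; i++) {
        // Trivial per-frame pattern; a stand-in for the real generateframe().
        for (std::size_t p = 0; p < frame.size(); p++)
            frame[p] = static_cast<uint8_t>((p + i) & 0xFF);
        if (fwrite(frame.data(), 1, frame.size(), pipe) != frame.size()) {
            pclose(pipe);
            return false;
        }
    }
    return pclose(pipe) != -1; // waits for the child to finish the file
}

// With ffmpeg installed, a call along these lines would encode the stream
// (flags are illustrative, not validated):
// encode_frames("ffmpeg -y -f rawvideo -pixel_format rgb24 "
//               "-video_size 1920x1080 -framerate 25 -i - "
//               "-c:v libx264 -preset slow -crf 20 Video.mp4",
//               1920, 1080, N);
```

Because the child process inherits the write end of the pipe for its whole lifetime, there is no per-frame fopen/fclose and therefore no chance of the open call blocking while waiting for a reader.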
