
Other articles (56)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanation of the notable changes involved in moving from version 0.1 of MediaSPIP to version 0.3. What are the new features?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for retrieving metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php is no longer installed, as it is no longer maintained (...)

  • Customise by adding your own logo, banner or background image

    5 September 2013

    Some themes take three customisation elements into account: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)

On other sites (9748)

  • avcodec/hevc_filter: Pass HEVCLocalContext when slice-threading

    29 June 2022, by Andreas Rheinhardt

    The HEVC decoder has both HEVCContext and HEVCLocalContext
    structures. The latter is supposed to be the structure
    containing the per-slice-thread state.

    Yet that is not how it is handled in practice: Each HEVCLocalContext
    has a unique HEVCContext allocated for it and each of these
    coincides with the main HEVCContext except in exactly one field:
    The corresponding HEVCLocalContext.
    This makes it possible to pass the HEVCContext everywhere where
    logically a HEVCLocalContext should be used.

    This commit stops doing this for lavc/hevc_filter.c; it also constifies
    everything that is possible in order to ensure that no slice thread
    accidentally modifies the main HEVCContext state.

    There are places where this was not possible, namely with the SAOParams
    in sao_filter_CTB() or with sao_pixels_buffer_h in copy_CTB_to_hv().
    Both of these instances lead to data races, see
    https://fate.ffmpeg.org/report.cgi?time=20220629145651&slot=x86_64-archlinux-gcc-tsan-slices

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavcodec/hevc_filter.c
    • [DH] libavcodec/hevcdec.c
    • [DH] libavcodec/hevcdec.h
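
    To make the pattern the commit describes concrete, here is a schematic sketch; the struct and function names are stand-ins, not the actual FFmpeg signatures. The idea is that per-slice-thread work receives the per-thread context, while the shared decoder state is only reachable through a const pointer, so a slice thread cannot modify it by accident.

        /* Schematic sketch only: "Shared" stands in for HEVCContext and
           "Local" for HEVCLocalContext. */
        typedef struct Shared {
            int ctb_size;                /* shared state, read-only while slice-threading */
        } Shared;

        typedef struct Local {
            const Shared *parent;        /* read-only view of the shared context */
            int scratch;                 /* per-slice-thread state */
        } Local;

        /* Before: static void filter_ctb(Shared *s, int x, int y);  (writable shared state) */

        /* After: the helper takes the per-thread context; the shared state stays const. */
        static void filter_ctb(Local *lc, int x, int y)
        {
            const Shared *s = lc->parent;        /* cannot be written from a slice thread */
            lc->scratch = x + y + s->ctb_size;   /* per-thread writes remain possible */
        }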
  • Coordinates with file format .ASS

    29 May 2024, by Alex

    I'm grabbing the text coordinates that I want to use in the .ASS file for centering text on a 1080 by 1920 video. I have an application that displays where the text sits on the video and can retrieve its position; an example of a centered position is 310 by 800. When I set the position of the text inside the .ASS file with those coordinates, the caption is not written where it is meant to be. Could someone explain how .ASS positioning works? If I use 200 px by 200 px as the position, the text is placed well past the center, even though the video is 1080x1920; shouldn't it be placed before the center of the video?


    This is what my .ASS file looks like; I'm using ffmpeg to write the subtitles into the video:


    [Script Info]
    Title: Video Subtitles
    ScriptType: v4.00+
    Collisions: Normal
    PlayDepth: 0
    PlayResX: 1080
    PlayResY: 1920

    [V4+ Styles]
    Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BorderStyle, Encoding
    Style: Default, Segoe UI,9,&H00FFFFFF,&HFFFF00,&H00FFFFFF,0,0
    Style: Background, Segoe UI,9,&H00FFFFFF,&H000000FF,&H00000000,3,0

    [Events]
    Format: Start, End, Style, MarginL, MarginR, MarginV, Text
    Dialogue: 0:00:00.00,0:00:05.00,Default,0,0,0,{\pos(275,876)} {\bord5\3c&H000000&\fs90}LISTEN IN {\r}

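    For reference, a minimal sketch of the kind of burn-in command used here, with hypothetical file names input.mp4, subs.ass and output.mp4 (the actual command is not shown in the post):

    ffmpeg -i input.mp4 -vf "ass=subs.ass" -c:a copy output.mp4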

    • Edit: I've added the PlayResX and PlayResY, but the text is still not where it's meant to be. To add a bit more context, I have a scene in PyQt that uses a coordinate system to place text over videos; for this text it says the position is x = 275 and y = 876. When I use those coordinates for the .ASS text, it does not show up in the same position. The scene containing the video and the text is also 1080 by 1920. These images show what I want and what I'm getting:

    What I'm trying to achieve (image)

    What I'm getting (image)
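    For reference on the positioning question: in ASS, \pos(x,y) is interpreted in the PlayResX by PlayResY script space (1080x1920 here once those headers are set), and it places the line's anchor point, whose location on the text is determined by the alignment (typically bottom-centre for a V4+ style, while \an5 anchors at the middle of the text). An illustrative dialogue line, not taken from the post, that should land dead-centre in this script, using the post's own Events format, would be:

    Dialogue: 0:00:00.00,0:00:05.00,Default,0,0,0,{\an5\pos(540,960)}CENTER TEST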

  • ffmpeg buffer not released

    11 March 2013, by ByteByter

    So, I have written a basic decoder for ffmpeg that simply reads the input frame's pixel data (stored using the RGB8 format) and places it directly into the output buffer (also RGB8). The problem is that when I use this decoder in ffmpeg, it says that there is 1 unreleased buffer (tested using ffmpeg -i Test.utah Test.png). Unfortunately, I am unsure which buffer it is talking about, as I am not creating my own buffer. I have tried releasing the AVCodecContext's coded_frame buffer in my decoder's close method, but this causes segmentation faults.

    Any help would be greatly appreciated.

    static int decode_frame(AVCodecContext *avctx, void *data, int *got_frame, AVPacket *avpkt)
    {
       int ret;           /* Hold return from get_buffer */
       int skipSize;      /* Number of dummy bytes to skip per line */
       int fseek = 8;     /* Location of the first pixel */
       int i = 0, j = 0;  /* Output buffer seek index / input buffer seek index */
       const uint8_t *buf = avpkt->data; /* Pointer to the input buffer */
       AVFrame *pict = data;             /* Pointer to the output frame */

       /* Set the output pixel format to RGB8 */
       avctx->pix_fmt = AV_PIX_FMT_RGB8;

       /* Get the width and height */
       bytestream_get_le32(&buf);
       avctx->width  = bytestream_get_le16(&buf);
       avctx->height = bytestream_get_le16(&buf);

       /* Release the old buffer */
       if (pict->data[0]) avctx->release_buffer(avctx, pict);

       /* Acquire a data buffer large enough to hold the decompressed picture */
       if ((ret = ff_get_buffer(avctx, pict)) < 0) return ret;
       skipSize = pict->linesize[0] - avctx->width;

       /* Transfer input buffer to output buffer */
       for (int y = 0; y < avctx->height; y++) {
           for (int x = 0; x < avctx->width; x++) {
               pict->data[0][i] = avpkt->data[fseek + j];
               j++;
               i++;
           }
           i += skipSize;
       }

       /* Inform ffmpeg the output is a key frame and that it is ready for external usage */
       pict->pict_type = AV_PICTURE_TYPE_I;
       pict->key_frame = 1;
       *got_frame = 1;
       return 0;
    }
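
    For context on the "1 unreleased buffer" warning, here is a hedged sketch, not taken from the question, of the pattern decoders built on this old, non-refcounted frame API conventionally use: the decoder keeps its AVFrame in a private context, reuses it across decode_frame() calls, and releases it exactly once in its close callback, which is the release the leak counter checks for when the codec is closed. The UtahContext struct and the function name below are hypothetical.

    /* Hedged sketch with hypothetical names; assumes this lives inside libavcodec,
       with "avcodec.h" and "internal.h" already included. */
    typedef struct UtahContext {
        AVFrame frame;                    /* buffer acquired with ff_get_buffer() */
    } UtahContext;

    static av_cold int utah_decode_close(AVCodecContext *avctx)
    {
        UtahContext *s = avctx->priv_data;

        /* Release the last buffer this decoder acquired; without this, ffmpeg
           reports it as an unreleased buffer when the codec is closed. */
        if (s->frame.data[0])
            avctx->release_buffer(avctx, &s->frame);
        return 0;
    }

    In that pattern, decode_frame() would call ff_get_buffer() on s->frame rather than on the caller-provided frame, and hand it back with *(AVFrame*)data = s->frame; before setting *got_frame.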