Other articles (38)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.2. What's new:
    Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP site using the news section.
    In MediaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)

On other sites (4920)

  • MP4 to HLS with ffmpeg during upload to Google storage

    13 February 2018, by Sofiia Vynnytska

    I have an API written in Java which is used for uploading mp4 videos from the front end. We store those videos on Google Cloud, and our application runs in Google Cloud. For Android clients, we need to convert the uploaded videos to HLS. Unfortunately, Google Cloud does not offer a video transcoder, so I need to convert the videos another way. I found that ffmpeg can do this, but I cannot find a good solution and need some advice.
    One idea is to deploy a cloud function on Google Cloud that converts mp4 to HLS after a video is uploaded. Do I need to upload the resulting m3u8 and ts files to storage? Is this approach okay? Is there any other possible solution?
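
    (For reference, a typical mp4-to-HLS conversion with ffmpeg looks something like the command below; the file names and segment length are illustrative, and the resulting playlist and ts segments would then be uploaded to storage:)

    # Split input.mp4 into HLS segments of ~10 s plus an unbounded playlist
    ffmpeg -i input.mp4 -c:v libx264 -c:a aac -hls_time 10 -hls_list_size 0 output.m3u8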

  • FFMPEG: Displaying a white screen using ffplay via a custom decoder

    11 December 2013, by Zax

    I have created a dummy decoder. In its decode function, I map the output frame pointers onto YUV420 data filled to produce a solid white frame.

    I also have a corresponding probe function for my dummy decoder, which takes a dummy input file and, based on some checks, returns AVPROBE_SCORE_MAX. This probe section works perfectly fine and invokes my custom dummy decoder.
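
    (For context, a probe of this kind, which strictly speaking belongs to the demuxer rather than the decoder, typically looks something like the sketch below; the magic bytes and function name are illustrative:)

    /* Illustrative probe sketch: return AVPROBE_SCORE_MAX when the input
     * starts with a known magic sequence, 0 otherwise. */
    static int dummydec_probe(AVProbeData *p)
    {
        if (p->buf_size >= 4 && memcmp(p->buf, "DUMY", 4) == 0)
            return AVPROBE_SCORE_MAX;
        return 0;
    }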

    The AVCodec structure for my dummy decoder is shown below:

    AVCodec ff_dummyDec_decoder = {
       .name           = "dummyDec",
       .type           = AVMEDIA_TYPE_VIDEO,
       .id             = AV_CODEC_ID_MYDEC,
       .priv_data_size = sizeof(MYDECContext),
       .pix_fmts       = (const enum AVPixelFormat[]) {AV_PIX_FMT_YUV420P},
       .init           = dummyDec_decode_init,
       .close          = dummyDec_decode_close,
       .decode         = dummyDec_decode_frame,
    };

    where:

    .init   -> a pointer to a function that performs my decoder-related initialisation
    .close  -> a pointer to a function that frees all memory allocated during initialisation
    .decode -> a pointer to a function that decodes one frame

    The definitions of the above functions are shown below:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "avcodec.h"

    unsigned char *yPtr=NULL;
    unsigned char *uPtr=NULL;
    unsigned char *vPtr=NULL;

    int width=416; //picture width and height to display in ffplay
    int height=240;

    static int dummyDec_decode_frame(AVCodecContext *avctx, void *data,
                                     int *got_frame_ptr, AVPacket *avpkt)
    {
       AVFrame *frame=data; //the output frame is returned through this pointer
       printf("\nDecode function entered\n");
       frame->width=width;
       frame->height=height;
       frame->format=AV_PIX_FMT_YUV420P;

       //initialise the frame->linesize[] array (the data pointers are set manually below)
       avpicture_fill((AVPicture*)frame, NULL, frame->format, frame->width, frame->height);

       frame->data[0]=yPtr;
       frame->data[1]=uPtr;
       frame->data[2]=vPtr;

       *got_frame_ptr = 1;

       printf("\nGotFramePtr set to 1\n");

       return width*height + (width/2)*(height/2) + (width/2)*(height/2); //the number of bytes used by the frame
    }

    static int dummyDec_decode_init(AVCodecContext *avctx)
    {
       printf("\nDummy Decoders init entered\n");

       //Allocate one buffer per YUV420 plane
       yPtr=(unsigned char*)malloc(width*height);
       uPtr=(unsigned char*)malloc((width/2)*(height/2));
       vPtr=(unsigned char*)malloc((width/2)*(height/2));

       if(yPtr==NULL || uPtr==NULL || vPtr==NULL)
           return AVERROR(ENOMEM);

       //Luma at 255 with neutral chroma (128) is white; 255 chroma would tint the frame
       memset(yPtr,255,width*height);
       memset(uPtr,128,(width/2)*(height/2));
       memset(vPtr,128,(width/2)*(height/2));

       return 0;
    }

    static int dummyDec_decode_close(AVCodecContext *avctx)
    {
       free(yPtr);
       free(uPtr);
       free(vPtr);
       return 0; //the close callback must return an int
    }

    From the dummyDec_decode_frame() function, I return the number of bytes used to display the white colour. Is this right? Secondly, I have no parser for my decoder, because I am just mapping a YUV buffer containing white data onto the AVFrame structure pointer.
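
    (For comparison, in the decode API of this era the return value is conventionally the number of bytes consumed from the input packet, not the size of the output picture; a minimal sketch, with the frame set up exactly as above:)

    static int sketch_decode_frame(AVCodecContext *avctx, void *data,
                                   int *got_frame_ptr, AVPacket *avpkt)
    {
        /* ...fill in the AVFrame as in dummyDec_decode_frame()... */
        *got_frame_ptr = 1;
        return avpkt->size; /* report the whole input packet as consumed */
    }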

    The command I use to run it is:

    ./ffplay -vcodec dummyDec -i input.bin

    The output is an infinite loop with the following messages:

    dummyDec probe entered
    Dummy Decoders init entered
    Decode function entered
    GotFramePtr set to 1
    Decode function entered
    GotFramePtr set to 1
    .
    .
    . (the last two messages keep repeating)

    Where am I going wrong? Is it the absence of a parser, or something else? I'm unable to proceed because of this. Thanks in advance.

    —Regards

  • Playing a custom AVI data stream using QtMultimedia

    26 December 2015, by sbabbi

    I need to play back a custom AVI file that contains a classic video stream and an audio stream, but also a custom data stream.

    The custom stream contains data that is visualised by some custom widgets; those widgets only need each custom frame to be written into a buffer at the proper time.

    Our application is based on Qt and already uses QMediaPlayer/QVideoWidget to play traditional videos, but the additional custom stream makes things more complicated, because AFAIK QMediaPlayer only plays the video/audio and ignores everything else.

    I would like to avoid reinventing the whole of qt-multimedia, but I am not sure how to make the best use of the available Qt classes.


    My ideas so far are:

    1. Write a custom media player class that demuxes and decodes the video using ffmpeg, implements the timing, uses QAudioOutput to play the audio, produces a stream of QVideoFrames to be rendered, and writes the custom data to a buffer for visualisation.

      The problem: in order to avoid writing the code to rescale/convert the video frames, I would like to reuse QVideoWidget, but it seems to work only with the "real" QMediaPlayer.

    2. Demux the input file and feed QMediaPlayer with the AV streams.
      Demux the input with ffmpeg (possibly leaving the decoding to the Qt backend), with one QIODevice that serves only the video/audio streams from the input file and another that serves the data stream. Play the video/audio with QMediaPlayer (see the sketch after the diagram below).

                   +-------+                          
                   | QFile |                          
                   +---^---+                          
                       |                              
                    inherits                          
                       |                              
             +--------------------+
             |    MyAviDemuxer    |
             |                    |
             |  holds a queue of  |
             |  demuxed packets   |
             +--------------------+
             |                    |
       readDataPacket      readVideoPacket
             |                    |
      +-------v--------+  +--------v-----------+            +-----------+
      | MyCustomReader |  | MyVideoAudioStream +--inherits--> QIODevice |
      +----------------+  +--------+-----------+            +-----------+
                                  |      
                               setMedia                  
                                  |                  
                          +-------v-------+          
                          | QMediaPlayer  |          
                          +---------------+          

      The problem: synchronising the timing of the data stream with QMediaPlayer, and handling headers and metadata correctly.
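
    (A minimal sketch of the MyVideoAudioStream wrapper from the diagram, assuming the demuxer exposes a readVideoPacket-style call; the demuxer interface here is hypothetical, only the stock QIODevice / QMediaPlayer::setMedia(media, stream) API from Qt 5 is relied upon, and backend support for stream-based playback varies by platform:)

    #include <QIODevice>
    #include <QMediaPlayer>

    // Hypothetical stand-in for MyAviDemuxer from the diagram above.
    struct MyAviDemuxer {
        qint64 readVideoPacket(char *dst, qint64 maxSize); // defined elsewhere
    };

    class MyVideoAudioStream : public QIODevice
    {
    public:
        explicit MyVideoAudioStream(MyAviDemuxer *demuxer, QObject *parent = nullptr)
            : QIODevice(parent), m_demuxer(demuxer) {}

        bool isSequential() const override { return true; } // no random access

    protected:
        // QMediaPlayer pulls the remuxed audio/video bytes through here.
        qint64 readData(char *data, qint64 maxSize) override
        { return m_demuxer->readVideoPacket(data, maxSize); }

        qint64 writeData(const char *, qint64) override { return -1; } // read-only

    private:
        MyAviDemuxer *m_demuxer;
    };

    // Usage (hypothetical): stream->open(QIODevice::ReadOnly);
    //                       player->setMedia(QMediaContent(), stream);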


    I am slightly inclined to option 1, just because it gives me more control, but I am wondering if I missed an easier solution (even Windows-only).