Advanced search

Media (91)

Other articles (84)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Uploading media and themes via FTP

    31 May 2013

    MediaSPIP also processes media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
    From the start you will find the following directories in your FTP space: config/ : the site's configuration directory; IMG/ : media already processed and online on the site; local/ : the site's cache directory; themes/ : custom themes and stylesheets; tmp/ : working directory (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, in the navigation menu, you can reach a "Language management" section that lets you enable support for additional languages.
    Each newly added language can still be disabled as long as no object has been created in it; once one has, the language becomes greyed out in the configuration and (...)

On other sites (7114)

  • MPEG-TS Segments HTTP Live Streaming

    5 June 2013, by user1069624

    I'm trying to interleave MPEG-TS segments but failing. One set of segments was captured using the laptop's built-in camera, then encoded using FFmpeg with the following command:

    ffmpeg -er 4 -y -f video4linux2 -s 640x480 -r 30 -i %s -isync -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 640x480 -vcodec libx264 -fflags +genpts -b 386k -coder 0 -me_range 16 -keyint_min 25 -i_qfactor 0.71 -bt 386k -maxrate 386k -bufsize 386k -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -aspect 640:480

    The other one is an AVI file that was encoded using the following command:

    ffmpeg -er 4 -y -f avi -s 640x480 -r 30 -i ./DSCF2021.AVI -vbsf dump_extra -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 640x480 -vcodec libx264 -fflags +genpts -b 386k -coder 0 -me_range 16 -keyint_min 25 -i_qfactor 0.71 -bt 386k -maxrate 386k -bufsize 386k -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -aspect 640:480

    The output is then segmented into TS segments using an open-source segmenter.

    If both sets come from the same source (both from the camera), they work fine. In this case, however, the second set of segments freezes: time passes, but the video does not move.
    So I think it's an encoding problem. My question is: how should I change the ffmpeg command to make this work?

    By interleave I mean having a playlist with the first set of segments and another playlist with the second set, and having the client fetch one and then the other (HTTP Live Streaming).

    The ffprobe output for one segment from the first set:

    Input #0, mpegts, from 'live1.ts':
     Duration: 00:00:09.76, start: 1.400000, bitrate: 281 kb/s
     Program 1 Service01
       Metadata:
         name            : Service01
         provider_name   : FFmpeg
       Stream #0.0[0x100]: Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 29.92 fps, 29.92 tbr, 90k tbn, 59.83 tbc
       Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, s16, 111 kb/s

    The ffprobe output for one segment from the second set:

    Input #0, mpegts, from 'ad1.ts':
     Duration: 00:00:09.64, start: 1.400000, bitrate: 578 kb/s
     Program 1 Service01
       Metadata:
         name            : Service01
         provider_name   : FFmpeg
       Stream #0.0[0x100]: Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
       Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, s16, 22 kb/s

    Thank you,
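
    One hedged reading of the two ffprobe dumps above, not from the original thread: the segment sets disagree on frame rate (29.92 fps vs 25 fps), and HLS clients generally expect consecutive segments of one stream to share codec parameters. A possible adjustment, assuming the segmenter itself is fine, is to move -r 30 behind the input: before -i it overrides the rate the input is read at, while after -i it makes ffmpeg duplicate or drop frames to hit the target output rate:

    ffmpeg -er 4 -y -f avi -s 640x480 -i ./DSCF2021.AVI -r 30 -vbsf dump_extra -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 640x480 -vcodec libx264 -fflags +genpts -b 386k -coder 0 -me_range 16 -keyint_min 25 -i_qfactor 0.71 -bt 386k -maxrate 386k -bufsize 386k -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -aspect 640:480

    Running ffprobe on the resulting segments should then report matching fps/tbr values for both sets.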

  • Why doesn't this FFmpeg code create a video from a series of images?

    20 October 2011, by user551117

    I have successfully compiled the FFmpeg library for use in an iOS application. I would like to use it for encoding a video from a series of images, but I can't seem to make it work.

    The following is the code that I am using to encode this video:

    AVCodec *codec;
    AVCodecContext *c= NULL;
    int i, out_size, size, outbuf_size;
    FILE *f;
    AVFrame *picture;
    uint8_t *outbuf;

    printf("Video encoding\n");

    // find the MPEG-4 video encoder
    codec = avcodec_find_encoder(CODEC_ID_MPEG4);
    if (!codec) {
       fprintf(stderr, "codec not found\n");
       exit(1);
    }

    c= avcodec_alloc_context();
    picture= avcodec_alloc_frame();

    // put sample parameters
    c->bit_rate = 400000;
    // resolution must be a multiple of two
    c->width = 320;
    c->height = 480;
    //frames per second
    c->time_base= (AVRational){1,25};
    c->gop_size = 10; // emit one intra frame every ten frames
    c->max_b_frames=1;
    c->pix_fmt = PIX_FMT_YUV420P;

    //open it
    if (avcodec_open(c, codec) < 0) {
       fprintf(stderr, "could not open codec\n");
       exit(1);
    }

    f = fopen([[NSTemporaryDirectory() stringByAppendingPathComponent:filename] UTF8String], "w");
    if (!f) {
       fprintf(stderr, "could not open %s\n",[filename UTF8String]);
       exit(1);
    }

    // alloc image and output buffer
    outbuf_size = 100000;
    outbuf = malloc(outbuf_size);
    size = c->width * c->height;

    #pragma mark -
    AVFrame* outpic = avcodec_alloc_frame();
    int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

    //create buffer for the output image
    uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

    #pragma mark -  
    for(i=1;i<48;i++) {
       fflush(stdout);

       int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
       uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

       UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%d.png", i]];
       CGImageRef newCgImage = [image CGImage];

       CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
       CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
       // note: this overwrites (and leaks) the av_malloc'd buffer, pointing it at the CGImage bytes instead
       buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);

       // the bytes are then described to FFmpeg as packed RGB24 with no row padding
       avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGB24, c->width, c->height);
       avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

       // note: a new SwsContext is allocated on every loop iteration and never freed
       struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                                                      PIX_FMT_RGB24,
                                                      c->width, c->height,
                                                      PIX_FMT_YUV420P,
                                                      SWS_FAST_BILINEAR, NULL, NULL, NULL);

       //perform the conversion
       sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);
       // Here is where I try to convert to YUV

       // encode the image

       out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
       printf("encoding frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f);

       free(buffer);
       buffer = NULL;      

    }

    // get the delayed frames
    for(; out_size; i++) {
       fflush(stdout);

       out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
       printf("write frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f); // write only the bytes actually produced, not the whole buffer
    }

    // add sequence end code to have a real mpeg file
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
    fclose(f);
    free(outbuf);

    avcodec_close(c);
    av_free(c);
    av_free(picture);
    printf("\n");

    What could be wrong with this code?
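
    A hedged sketch of one likely culprit, assuming the same old-API FFmpeg build as above: CGImage bitmaps are usually 32-bit RGBA (or BGRA/ARGB) with per-row padding, while the loop above hands the bytes to swscale as packed RGB24 with an implied width*3 stride, so the converter reads a scrambled layout. Describing the bitmap's real format and stride to swscale, dropping into the same loop, would look roughly like this:

    // hedged sketch: describe the CGImage bytes as they are actually laid out
    CGDataProviderRef dp = CGImageGetDataProvider(newCgImage);
    CFDataRef bitmap     = CGDataProviderCopyData(dp);
    const uint8_t *src   = CFDataGetBytePtr(bitmap);
    int srcStride        = (int)CGImageGetBytesPerRow(newCgImage); // honours row padding

    const uint8_t *srcData[4] = { src, NULL, NULL, NULL };
    int srcLinesize[4]        = { srcStride, 0, 0, 0 };

    struct SwsContext *sws = sws_getContext(c->width, c->height, PIX_FMT_RGBA,
                                            c->width, c->height, PIX_FMT_YUV420P,
                                            SWS_FAST_BILINEAR, NULL, NULL, NULL);
    sws_scale(sws, srcData, srcLinesize, 0, c->height, outpic->data, outpic->linesize);
    sws_freeContext(sws);
    CFRelease(bitmap); // release the CFData copy rather than free()ing its bytes

    Whether the pixels are really RGBA rather than BGRA or ARGB depends on how the PNGs were decoded, so inspecting CGImageGetBitmapInfo(newCgImage) first would be prudent.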

  • c++, FFMPEG, H264, creating zero-delay stream

    5 February 2015, by Mat

    I'm trying to encode video (using the H.264 codec at the moment, but other codecs would be fine too if better suited to my needs) such that the data needed for decoding is available directly after each frame, including the first, has been encoded (so I want only I and P frames, no B frames).

    How do I need to set up the AVCodecContext to get such a stream? So far, experimenting with the values has always resulted in avcodec_encode_video() returning 0 on the first frame.

    // edit: this is currently my setup code for the AVCodecContext:

    static AVStream* add_video_stream(AVFormatContext *oc, enum CodecID codec_id, int w, int h, int fps)
    {
       AVCodecContext *c;
       AVStream *st;
       AVCodec *codec;

       /* find the video encoder */
       codec = avcodec_find_encoder(codec_id);
       if (!codec) {
           fprintf(stderr, "codec not found\n");
           exit(1);
       }

       st = avformat_new_stream(oc, codec);
       if (!st) {
           fprintf(stderr, "Could not alloc stream\n");
           exit(1);
       }

       c = st->codec;

       /* Put sample parameters. */
       c->bit_rate = 400000;
       /* Resolution must be a multiple of two. */
       c->width    = w;
       c->height   = h;
       /* timebase: This is the fundamental unit of time (in seconds) in terms
        * of which frame timestamps are represented. For fixed-fps content,
        * timebase should be 1/framerate and timestamp increments should be
        * identical to 1. */
       c->time_base.den = fps;
       c->time_base.num = 1;
       c->gop_size      = 12; /* emit one intra frame every twelve frames at most */

       c->codec = codec;
       c->codec_type = AVMEDIA_TYPE_VIDEO;
       c->coder_type = FF_CODER_TYPE_VLC;
       c->me_method = 7; //motion estimation algorithm
       c->me_subpel_quality = 4;
       c->delay = 0;
       c->max_b_frames = 0;
       c->thread_count = 1; // more than one threads seem to increase delay
       c->refs = 3;

       c->pix_fmt       = PIX_FMT_YUV420P;

       /* Some formats want stream headers to be separate. */
       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
           c->flags |= CODEC_FLAG_GLOBAL_HEADER;

       return st;
    }

    But with this, avcodec_encode_video() buffers 13 frames before returning any bytes (after that, it returns bytes on every frame). If I set gop_size to 0, avcodec_encode_video() returns bytes only after the second frame has been passed to it. I need zero delay, though.

    This person apparently succeeded (even with a larger GOP): http://mailman.videolan.org/pipermail/x264-devel/2009-May/005880.html but I don't see what he is doing differently.
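
    A hedged sketch, on the assumption that the encoder here is libx264: the multi-frame buffering typically comes from x264's rc-lookahead and frame threading rather than from the fields set above, and x264 exposes a "zerolatency" tune that disables both. These are private codec options, set on the context's priv_data before the codec is opened:

    #include <libavutil/opt.h>

    /* after filling in the AVCodecContext fields, before opening the codec: */
    c->max_b_frames = 0;                                 /* I and P frames only */
    av_opt_set(c->priv_data, "preset", "ultrafast", 0);  /* optional: minimal encoder latency */
    av_opt_set(c->priv_data, "tune", "zerolatency", 0);  /* no lookahead, no frame delay */

    With zerolatency in effect, each avcodec_encode_video() call should return data for the frame just submitted, including the first, even with a nonzero gop_size.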