

Keyword: - Tags -/soundtrack

Other articles (65)

  • Images

    15 May 2013
  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, a mutualisation instance is defined by several things: the data in the spip_mutus table; its logo; its main author (id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualisation instance.
    It can therefore be quite sensible to want to retrieve some of this information in order to complete the installation of an instance, for example: to retrieve the (...)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply has the effect of regularly calling the Cron of every instance in the mutualisation. Combined with a system Cron on the central site of the mutualisation, this makes it possible to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...)

On other sites (8629)

  • How to encode PCM to AAC with ffmpeg using different sample formats?

    1 August 2014, by user2212461

    I'm using the following code to encode PCM to AAC with libav. It works with sample_fmt = AV_SAMPLE_FMT_S16 and a newer release of libav. In older versions, only sample_fmt = AV_SAMPLE_FMT_FLT is allowed, but then the encoder always returns 0 (nothing encoded). What do I need to adapt in the code to make it work with sample_fmt = AV_SAMPLE_FMT_S16? Is sample_fmt the input or output format?

    avcodec_register_all();
    codec = avcodec_find_encoder(CODEC_ID_AAC);
    c = avcodec_alloc_context3(codec);

    c->bit_rate   = 64000;
    c->sample_rate= 44100;
    c->channels   = 1;
    c->frame_size = 86000;
    c->sample_fmt = AV_SAMPLE_FMT_FLT;//---> this works with AV_SAMPLE_FMT_S16

    buf = (uint8_t *)malloc(bufSize);
    audioData = (uint8_t *)malloc(size);

    //fill audioData

    int packetSize = avcodec_encode_audio(c, buf, bufSize,
            (short *)audioData);//------>returns 0 when using AV_SAMPLE_FMT_FLT
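
    Answering the last question first: AVCodecContext.sample_fmt describes the raw PCM samples you feed into the encoder; the output is always a compressed AAC bitstream. A minimal sketch of avoiding the hard-coded format follows, assuming a libav/FFmpeg build of roughly the same era as the question; the helper name pick_sample_fmt is illustrative, not part of the library.

    /* Illustrative helper (not part of libav): choose an input sample format
     * the encoder actually supports instead of hard-coding one. */
    static enum AVSampleFormat pick_sample_fmt(const AVCodec *codec,
                                               enum AVSampleFormat preferred)
    {
        const enum AVSampleFormat *p = codec->sample_fmts;
        if (!p)
            return preferred;                 /* encoder accepts any format */
        for (; *p != AV_SAMPLE_FMT_NONE; p++)
            if (*p == preferred)
                return preferred;             /* e.g. AV_SAMPLE_FMT_S16 is supported */
        return codec->sample_fmts[0];         /* fall back to a supported format */
    }

    /* Before opening the encoder:
     *     c->sample_fmt = pick_sample_fmt(codec, AV_SAMPLE_FMT_S16);
     * If the chosen format differs from the PCM you actually have, convert the
     * samples with libswresample (swr_alloc_set_opts()/swr_convert()) before
     * passing them to the encoder. */
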
  • A newbie Struggling with FFmpeg Video Encoding

    19 April 2014, by iJose

    For the last week I have been struggling with FFmpeg video encoding.
    I am capturing video from the device camera using UIImagePickerController
    and then encoding it with the following function.

    After encoding, I am not able to save the video to my device's camera roll.

    I used UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(filepath), but it returns zero and I have no idea why.

    +(void)videoEncoder:(NSString *)filename
    {
       avcodec_register_all();
       int codec_id = AV_CODEC_ID_MPEG4;
       AVCodec *codec;
       AVCodecContext *c= NULL;
       int i, ret, x, y, got_output;
       FILE *f;
       AVFrame *frame;
       AVPacket pkt;
       uint8_t endcode[] = { 0, 0, 1, 0xb7 };

       printf("Encode video file %s\n", [filename UTF8String]);

       /* find the MPEG-4 video encoder */
       codec = avcodec_find_encoder(codec_id);
       if (!codec) {
           fprintf(stderr, "Codec not found\n");
           exit(1);
       }

       c = avcodec_alloc_context3(codec);
       if (!c) {
           fprintf(stderr, "Could not allocate video codec context\n");
           exit(1);
       }

       /* put sample parameters */
       c->bit_rate = 400000;
       /* resolution must be a multiple of two */
       c->width = 352;
       c->height = 288;
       /* frames per second */
       c->time_base= (AVRational){1,25};
       c->gop_size = 10; /* emit one intra frame every ten frames */
       c->max_b_frames=1;
       c->pix_fmt = AV_PIX_FMT_YUV420P;

       if(codec_id == AV_CODEC_ID_MPEG4)
           av_opt_set(c->priv_data, "preset", "slow", 0);

       /* open it */
       if (avcodec_open2(c, codec, NULL) < 0) {
           fprintf(stderr, "Could not open codec\n");
           exit(1);
       }

       f = fopen([filename UTF8String], "wb");
       if (!f)
       {
           fprintf(stderr, "Could not open %s\n", [filename UTF8String]);
           exit(1);
       }

       frame = av_frame_alloc();
       if (!frame) {
           fprintf(stderr, "Could not allocate video frame\n");
           exit(1);
       }
       frame->format = c->pix_fmt;
       frame->width  = c->width;
       frame->height = c->height;

       /* the image can be allocated by any means and av_image_alloc() is
        * just the most convenient way if av_malloc() is to be used */
       ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height,
                            c->pix_fmt, 32);
       if (ret < 0) {
           fprintf(stderr, "Could not allocate raw picture buffer\n");
           exit(1);
       }

       /* encode 1 second of video */
       for(i=0;i<25;i++) {
           av_init_packet(&pkt);
           pkt.data = NULL;    // packet data will be allocated by the encoder
           pkt.size = 0;

           fflush(stdout);
           /* prepare a dummy image */
           /* Y */
            for(y=0;y<c->height;y++) {
                for(x=0;x<c->width;x++) {
                   frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
               }
           }

           /* Cb and Cr */
            for(y=0;y<c->height/2;y++) {
                for(x=0;x<c->width/2;x++) {
                   frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
                   frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
               }
           }

           frame->pts = i;

           /* encode the image */
           ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
           if (ret < 0) {
               fprintf(stderr, "Error encoding frame\n");
               exit(1);
           }

           if (got_output) {
               printf("Write frame %3d (size=%5d)\n", i, pkt.size);
               fwrite(pkt.data, 1, pkt.size, f);
               av_free_packet(&pkt);
           }
       }

       /* get the delayed frames */
       for (got_output = 1; got_output; i++) {
           fflush(stdout);

           ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
           if (ret < 0) {
               fprintf(stderr, "Error encoding frame\n");
               exit(1);
           }

           if (got_output) {
               printf("Write frame %3d (size=%5d)\n", i, pkt.size);
               fwrite(pkt.data, 1, pkt.size, f);
               av_free_packet(&pkt);
           }
       }

       /* add sequence end code to have a real mpeg file */
       fwrite(endcode, 1, sizeof(endcode), f);
       fclose(f);

       avcodec_close(c);
       av_free(c);
       av_freep(&frame->data[0]);
       av_frame_free(&frame);
       printf("\n");
    }
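
    One likely reason the camera roll rejects the file is that the function above writes a bare MPEG-4 elementary stream (finished with an MPEG-1 style end code) rather than a real container such as .mp4 or .mov, which is what UIVideoAtPathIsCompatibleWithSavedPhotosAlbum() typically says NO to. Below is a hedged sketch, using the same era of the FFmpeg API as the code above (st->codec, av_free_packet), of routing the packets through libavformat's MP4 muxer instead of fwrite(); error checks are omitted and the wiring is only illustrative, not the asker's code.

    /* Sketch: needs #include <libavformat/avformat.h> */
    const char *path = [filename UTF8String];

    av_register_all();                         /* old API: registers the mp4 muxer */

    AVFormatContext *oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, "mp4", path);

    AVStream *st = avformat_new_stream(oc, NULL);
    st->time_base = c->time_base;
    /* Copy the encoder parameters after avcodec_open2(c, ...) so the
     * extradata is populated, and let the muxer pick a valid MP4 tag. */
    avcodec_copy_context(st->codec, c);
    st->codec->codec_tag = 0;

    /* MP4 stores the codec configuration in the container header, so before
     * avcodec_open2() the encoder context should get:
     *     if (oc->oformat->flags & AVFMT_GLOBALHEADER)
     *         c->flags |= CODEC_FLAG_GLOBAL_HEADER;
     */

    avio_open(&oc->pb, path, AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);

    /* In both encode loops, replace fwrite(pkt.data, 1, pkt.size, f) with:
     *     pkt.stream_index = st->index;
     *     pkt.pts = av_rescale_q(pkt.pts, c->time_base, st->time_base);
     *     pkt.dts = av_rescale_q(pkt.dts, c->time_base, st->time_base);
     *     av_write_frame(oc, &pkt);   // single stream, no interleaving needed
     * and keep the existing av_free_packet(&pkt).
     */

    /* Instead of writing the end code and fclose(): */
    av_write_trailer(oc);
    avio_close(oc->pb);
    avformat_free_context(oc);
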
  • Gstreamer pipeline to scale down video before streaming

    20 November 2014, by r3dsm0k3

    Here is what I'm trying to achieve.

    I'm streaming from a Logitech C920 camera on a BeagleBone Black with GStreamer. I have to save a copy of the video locally while it is streaming, which I have achieved with tee.
    The Logitech camera delivers H.264-encoded video at a certain bitrate, usually quite high.

    I'm streaming from a moving car over 3G, and the network is not good enough to send that stream to the nginx-rtmp server I'm using to redistribute it, which results in strong artifacts.

    I'm able to lower the bitrate of the captured video using uvch264, but then the locally saved video would also have the lower bitrate.

    Is there any way of capturing higher-bitrate 1080p video from the camera while sending a lower-resolution, lower-bitrate video to the streaming server?

    The following is the pipeline I currently have.

    gst-launch-1.0 -v -e uvch264src initial-bitrate=400000 average-bitrate=400000 iframe-period=3000  device=/dev/video0 name=src auto-start=true  src.vidsrc ! queue ! video/x-h264,width=1920,height=1080,framerate=30/1 ! h264parse ! flvmux streamable=true  name=flvmuxer ! queue ! tee name=t ! queue ! filesink location=/mnt/test.flv t. ! queue ! rtmpsink location=$SERVER/hls/$CAM1

    I could also try sending the higher-bitrate video to a udpsink instead of rtmpsink, have another GStreamer process running in parallel pick the data up with a udpsrc, and then post-process/re-encode it and send it to the RTMP server.

    I'm also limited by the processing power the BeagleBone has available for encoding the videos. Currently I'm trying this with one camera, and in the finished project I would like to have two cameras connected. The upload speed I'm getting on the network is under 1 Mbps.

    How do I solve this with less load on the BeagleBone? I'm very open to a new architecture as well.
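
    One architecture that keeps the load on the BeagleBone low is to let the C920 keep delivering full-quality H.264 on uvch264src's vidsrc pad for the local recording, and to take the camera's raw viewfinder pad (vfsrc) at a small resolution for the RTMP branch, re-encoding only that small picture with x264enc. A hedged sketch of such a pipeline follows; the viewfinder caps, bitrates and x264enc settings are assumptions to be adjusted to the actual camera modes and the available 3G uplink.

    gst-launch-1.0 -v -e uvch264src initial-bitrate=3000000 average-bitrate=3000000 iframe-period=3000 device=/dev/video0 name=src auto-start=true  src.vidsrc ! queue ! video/x-h264,width=1920,height=1080,framerate=30/1 ! h264parse ! flvmux streamable=true ! filesink location=/mnt/test.flv  src.vfsrc ! queue ! video/x-raw,format=YUY2,width=320,height=240,framerate=15/1 ! videoconvert ! x264enc bitrate=256 speed-preset=ultrafast tune=zerolatency ! h264parse ! flvmux streamable=true ! rtmpsink location=$SERVER/hls/$CAM1

    Encoding only a 320x240 picture with x264's ultrafast preset is usually feasible on the BeagleBone, whereas decoding and re-encoding the 1080p stream would not be; if even this proves too heavy, the udpsink hand-off to a separate machine that the question already mentions remains the fallback.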