Advanced search

Media (0)

Word: - Tags -/gis

No media matching your criteria is available on the site.

Other articles (95)

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators configure these menus in detail.
    Menus created at site initialization
    By default, three menus are created automatically when the site is initialized: The main menu; Identifier: barrenav; This menu is generally inserted at the top of the page, after the header block, and its identifier makes it compatible with skeletons based on Zpip; (...)

  • Managing the farm

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Some settings can be adjusted to accommodate the needs of the different channels.
    To begin with, it uses the "Gestion de mutualisation" plugin

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (9187)

  • Save frame as image during video stream using ffmpeg, in C

    30 April 2017, by George T.

    I’m using the Parrot SDK3 to retrieve the video stream from a Bebop2 drone. From it I need to "extract" a frame and save it as an image.

    The whole code can be found here, but I shall try to include what I understand to be the important parts.

    This is where the video is sent to mplayer; I assume from this that the video format is H.264:

    if (videoOut != NULL)
    {
       if (codec.type == ARCONTROLLER_STREAM_CODEC_TYPE_H264)
       {
           if (DISPLAY_WITH_MPLAYER)
           {
               fwrite(codec.parameters.h264parameters.spsBuffer, codec.parameters.h264parameters.spsSize, 1, videoOut);
               fwrite(codec.parameters.h264parameters.ppsBuffer, codec.parameters.h264parameters.ppsSize, 1, videoOut);

               fflush (videoOut);
           }
       }
    }

    This is where the frame is received and written to videoOut, to be displayed:

    eARCONTROLLER_ERROR didReceiveFrameCallback (ARCONTROLLER_Frame_t *frame, void *customData)
    {
       if (videoOut != NULL)
       {
           if (frame != NULL)
           {
               if (DISPLAY_WITH_MPLAYER)
               {
                   fwrite(frame->data, frame->used, 1, videoOut);

                   fflush (videoOut);
               }
           }
       }
       return ARCONTROLLER_OK;
    }

    The idea I had in mind was to fwrite(frame->data, frame->used, 1, videoOut); to a different file, but that just creates a corrupted file, probably because of the encoding. How can I get this frame and store it as a separate image? The preferred image file types are .png, .jpg or .gif. See the sketch below for one possible approach.

    Any help will be greatly appreciated! Thanks in advance!
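
    One possible direction, offered only as a rough sketch rather than a confirmed solution: feed the same buffers that are written to videoOut (the SPS/PPS buffers first, then each frame->data of frame->used bytes) into libavcodec's H.264 decoder, convert the decoded picture to RGB with libswscale, and save it, here as a .ppm that any image tool can convert to .png or .jpg. The helper names init_decoder and decode_and_save, the fixed out.ppm output, and the assumption that each callback delivers a complete H.264 access unit are illustrative, not part of the Parrot SDK.

    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>
    #include <stdio.h>

    static AVCodecContext *dec_ctx;
    static AVFrame *dec_frame;

    /* Set up a standalone H.264 decoder (hypothetical helper). */
    static int init_decoder(void)
    {
        const AVCodec *dec = avcodec_find_decoder(AV_CODEC_ID_H264);
        if (!dec)
            return -1;
        dec_ctx = avcodec_alloc_context3(dec);
        dec_frame = av_frame_alloc();
        return avcodec_open2(dec_ctx, dec, NULL);
    }

    /* Feed one buffer (SPS/PPS or frame->data / frame->used) to the decoder;
       whenever a picture comes out, convert it to RGB24 and write a PPM. */
    static int decode_and_save(const uint8_t *buf, int size)
    {
        AVPacket *pkt = av_packet_alloc();
        pkt->data = (uint8_t *)buf;
        pkt->size = size;
        int ret = avcodec_send_packet(dec_ctx, pkt);
        av_packet_free(&pkt);
        if (ret < 0)
            return ret;

        while (avcodec_receive_frame(dec_ctx, dec_frame) == 0) {
            int w = dec_frame->width, h = dec_frame->height;
            struct SwsContext *sws = sws_getContext(w, h, dec_ctx->pix_fmt,
                                                    w, h, AV_PIX_FMT_RGB24,
                                                    SWS_BILINEAR, NULL, NULL, NULL);
            uint8_t *rgb = av_malloc((size_t)w * h * 3);
            uint8_t *dst[4] = { rgb };
            int dst_linesize[4] = { 3 * w };
            sws_scale(sws, (const uint8_t * const *)dec_frame->data,
                      dec_frame->linesize, 0, h, dst, dst_linesize);

            FILE *f = fopen("out.ppm", "wb");   /* binary PPM: header + raw RGB */
            fprintf(f, "P6\n%d %d\n255\n", w, h);
            fwrite(rgb, 1, (size_t)w * h * 3, f);
            fclose(f);

            av_free(rgb);
            sws_freeContext(sws);
        }
        return 0;
    }

    With this sketch, didReceiveFrameCallback would call decode_and_save(frame->data, frame->used) for the frame it wants to capture, after the spsBuffer and ppsBuffer from the codec callback have been passed through the same function once.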

  • FFmpeg - MJPEG decoding - getting different values

    27 December 2016, by ahmadh

    I have a set of JPEG frames which I am muxing into an AVI, which gives me an MJPEG video. This is the command I run on the console:

    ffmpeg -y -start_number 0 -i %06d.JPEG -codec copy vid.avi

    When I try to demux the video using the FFmpeg C API, I get frames whose values are slightly different. The demuxing code looks something like this:

    AVFormatContext* fmt_ctx = NULL;
    AVCodecContext* cdc_ctx = NULL;
    AVCodec* vid_cdc = NULL;
    int ret;
    unsigned int height, width;

    ....
    // read_nframes is the number of frames to read
    output_arr = new unsigned char [height * width * 3 *
                                   sizeof(unsigned char) * read_nframes];

    avcodec_open2(cdc_ctx, vid_cdc, NULL);

    int num_bytes;
    uint8_t* buffer = NULL;
    const AVPixelFormat out_format = AV_PIX_FMT_RGB24;

    num_bytes = av_image_get_buffer_size(out_format, width, height, 1);
    buffer = (uint8_t*)av_malloc(num_bytes * sizeof(uint8_t));

    AVFrame* vid_frame = NULL;
    vid_frame = av_frame_alloc();
    AVFrame* conv_frame = NULL;
    conv_frame = av_frame_alloc();

    av_image_fill_arrays(conv_frame->data, conv_frame->linesize, buffer,
                        out_format, width, height, 1);

    struct SwsContext *sws_ctx = NULL;
    sws_ctx = sws_getContext(width, height, cdc_ctx->pix_fmt,
                            width, height, out_format,
                            SWS_BILINEAR, NULL,NULL,NULL);

    int frame_num = 0;
    AVPacket vid_pckt;
    while (av_read_frame(fmt_ctx, &vid_pckt) >=0) {
       ret = avcodec_send_packet(cdc_ctx, &vid_pckt);
       if (ret < 0)
           break;

       ret = avcodec_receive_frame(cdc_ctx, vid_frame);
       if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
           break;
       if (ret >= 0) {
            // convert the decoded frame from its native pixel format to packed RGB24
           sws_scale(sws_ctx, vid_frame->data,
                     vid_frame->linesize, 0, vid_frame->height,
                     conv_frame->data, conv_frame->linesize);

           unsigned char* r_ptr = output_arr +
               (height * width * sizeof(unsigned char) * 3 * frame_num);
           unsigned char* g_ptr = r_ptr + (height * width * sizeof(unsigned char));
           unsigned char* b_ptr = g_ptr + (height * width * sizeof(unsigned char));
           unsigned int pxl_i = 0;

           for (unsigned int r = 0; r < height; ++r) {
               uint8_t* avframe_r = conv_frame->data[0] + r*conv_frame->linesize[0];
               for (unsigned int c = 0; c < width; ++c) {
                   r_ptr[pxl_i] = avframe_r[0];
                   g_ptr[pxl_i]   = avframe_r[1];
                   b_ptr[pxl_i]   = avframe_r[2];
                   avframe_r += 3;
                   ++pxl_i;
               }
           }

           ++frame_num;

           if (frame_num >= read_nframes)
               break;
       }
    }

    ...

    In my experience around two-thirds of the pixel values differ, each by ±1 (in a range of [0, 255]). I am wondering whether this is due to some decoding scheme FFmpeg uses for reading JPEG frames? I tried encoding and decoding PNG frames, and that works perfectly fine.

    In short, my goal is to get the same pixel-by-pixel values for each JPEG frame as I would have gotten if I had read the JPEG images directly. Here is the stand-alone code I used. It includes CMake files to build the code, and a couple of JPEG frames along with the converted AVI file to reproduce the problem (pass --filetype png to test the PNG decoding). See also the comparison sketch below.
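
    One way to narrow down where the ±1 comes from, sketched below assuming the same send/receive decoding API used above: decode one of the original standalone .JPEG files through an identical libavcodec + libswscale path and compare its packed RGB24 output byte-for-byte with conv_frame->data[0] for the corresponding frame pulled from the AVI. The helper decode_jpeg_rgb24 and its calling convention are illustrative, not part of the original code.

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    /* Decode a single JPEG file to a packed RGB24 buffer (caller frees with av_free()). */
    static uint8_t* decode_jpeg_rgb24(const char* path, int* w, int* h)
    {
        AVFormatContext* fmt = NULL;
        if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
            return NULL;
        avformat_find_stream_info(fmt, NULL);

        AVStream* st = fmt->streams[0];
        const AVCodec* dec = avcodec_find_decoder(st->codecpar->codec_id);
        AVCodecContext* ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, st->codecpar);
        avcodec_open2(ctx, dec, NULL);

        AVFrame* frm = av_frame_alloc();
        AVPacket pkt;
        uint8_t* rgb = NULL;

        while (av_read_frame(fmt, &pkt) >= 0) {
            if (avcodec_send_packet(ctx, &pkt) >= 0 &&
                avcodec_receive_frame(ctx, frm) >= 0) {
                *w = frm->width;
                *h = frm->height;
                rgb = (uint8_t*)av_malloc((size_t)3 * (*w) * (*h));
                uint8_t* dst[4] = { rgb };
                int dst_linesize[4] = { 3 * (*w) };
                struct SwsContext* sws = sws_getContext(*w, *h, ctx->pix_fmt,
                                                        *w, *h, AV_PIX_FMT_RGB24,
                                                        SWS_BILINEAR, NULL, NULL, NULL);
                // same conversion step as the AVI path, so only the decode differs
                sws_scale(sws, (const uint8_t* const*)frm->data, frm->linesize,
                          0, *h, dst, dst_linesize);
                sws_freeContext(sws);
                av_packet_unref(&pkt);
                break;
            }
            av_packet_unref(&pkt);
        }

        av_frame_free(&frm);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return rgb;
    }

    If the buffer returned here matches conv_frame->data[0] for the corresponding AVI frame, the -codec copy remux is not the source of the difference, and the ±1 would have to come from comparing two different JPEG decoding paths rather than from the AVI step.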

  • ffmpeg is not working while uploading file using uploadify

    21 January 2016, by Ranjith

    For a particular video I tried it like this, and it worked:

    exec('/usr/bin/ffmpeg -i /home/xxxxxx/public_html/test/video1.mp4  /home/xxxxxxx/public_html/test/video1.flv');

    but for Uploadify I wrote the code like this:

    <?php
    if (!empty($_FILES)) {
        $userId      = $_SESSION["user_userid"];
        $filename    = $_FILES['Filedata']['name'];
        $filetmpname = $_FILES['Filedata']['tmp_name'];
        $fileType    = $_FILES["Filedata"]["type"];
        $fileSizeMB  = ($_FILES["Filedata"]["size"] / 1024 / 1024);
        $folder      = $_REQUEST['folder'];

        // runs: /usr/bin/ffmpeg -i/home/xxxxxx/public_html/private/{folder}/{filename} /home/xxxxxx/public_html/private/{folder}/{filename}.flv
        exec("/usr/bin/ffmpeg -i"."/home/xxxxxx/public_html/private/".$folder."/".$filename." "."/home/xxxxxx/public_html/private/".$folder."/".$filename.".flv");
    } elseif ($_POST['d']) {
        $filename = $_POST['d'];
        $folder   = $_REQUEST['folder'];
        $dFile    = $folder.$filename;
        if (file_exists($dFile)) {
            unlink($dFile);
        }
    }
    ?>

    This code is not converting the uploaded file.
    Help me, please.

    Thanks