
Other articles (42)

  • Request to create a channel

    12 March 2010

    Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at the time of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user fills in a series of form fields that first of all give the administrators information about (...)

  • Managing the farm

    2 March 2010

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted in order to balance the needs of the different channels.
    As a first step, it uses the "Gestion de mutualisation" plugin

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page dedicated to the general configuration of the template; a page dedicated to the configuration of the site's home page; a page dedicated to the configuration of the sections.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and their specific features (...)

On other sites (3132)

  • Using x264 and librtmp to send live camera frames, but Flash can't show them

    9 May 2016, by brian_wang

    I am using x264 and librtmp to send my live camera frames, and everything seems right, but my test Flash player can't show the correct video. Sometimes it seems correct, but when I click the play button again, it doesn't show any picture in the Flash player.

    Here is my x264 configuration code:

    x264_param_default_preset(&x264param, "ultrafast", "zerolatency");
    x264param.i_threads = 2;
    x264param.i_width = width;
    x264param.i_height = height;
    x264param.i_log_level = X264_LOG_DEBUG;
    x264param.i_fps_num = x264param.i_timebase_num= fps;
    x264param.i_fps_den = x264param.i_timebase_den=1;
    x264param.i_frame_total = 0;

    x264param.i_frame_reference =1;
    //x264param.i_frame_reference = 2;
    x264param.i_keyint_min = 25;
    x264param.i_keyint_max = fps*3;
    x264param.i_scenecut_threshold = 40;

    x264param.b_deblocking_filter = 1;
    x264param.b_cabac = 0;
    x264param.analyse.i_trellis = 0;
    x264param.analyse.b_chroma_me = 1;

    x264param.vui.i_sar_width = 0;
    x264param.vui.i_sar_height = 0;
    x264param.i_bframe_bias = 0;
    x264param.b_interlaced= 0;
    x264param.analyse.i_subpel_refine = 6; /* 0..5 -> 1..6 */
    x264param.analyse.i_me_method = X264_ME_DIA;//X264_ME_HEX?X264_ME_DIA
    x264param.analyse.i_me_range = 16;
    x264param.analyse.i_direct_mv_pred = X264_DIRECT_PRED_AUTO;
    x264param.i_deblocking_filter_alphac0 = 0;
    x264param.i_deblocking_filter_beta = 0;
    //x264param.analyse.intra = X264_ANALYSE_I4x4;
    x264param.analyse.intra = X264_ANALYSE_I4x4;//  | X264_ANALYSE_PSUB16x16 | X264_ANALYSE_BSUB16x16;
    x264param.analyse.inter = X264_ANALYSE_I4x4 | X264_ANALYSE_PSUB16x16 | X264_ANALYSE_BSUB16x16;
    //edit 2014-7-28
    x264param.analyse.b_transform_8x8 = 1;
    //x264param.analyse.b_transform_8x8 = 0;
    x264param.analyse.b_fast_pskip = 1;
    x264param.i_bframe = 0;
    //x264param.b_intra_refresh
    x264param.analyse.b_weighted_bipred = 0;

    //// Intra refres:
    x264param.i_keyint_max = 250;
    x264param.b_intra_refresh = 0;
    ////Rate control:
    //x264param.rc.i_rc_method = X264_RC_CRF;

    //Rate Control
    x264param.rc.f_ip_factor = 1.4f;
    x264param.rc.f_pb_factor = 1.3f;
    x264param.rc.f_qcompress = 1.0;
    x264param.rc.i_qp_min = 20;//20;
    x264param.rc.i_qp_max = 32;
    x264param.rc.i_qp_step = 1;

    switch (0)
    {
    case 0: /* 1 PASS ABR */
       x264param.rc.i_rc_method = X264_RC_ABR;
       x264param.rc.i_bitrate = 300; // max = 5000
       x264param.rc.b_mb_tree = 0;
       break;
    case 1: /* 1 PASS CQ */
       x264param.rc.i_rc_method = X264_RC_CQP;
       x264param.rc.i_qp_constant = 26;//10 - 51
       break;
    }

    //For streaming:
    x264param.b_repeat_headers = 1;
    x264param.b_annexb = 1;
    x264_param_apply_profile(&x264param, "baseline");
    encoder = x264_encoder_open(&x264param);
    x264_picture_init( &pic_in );
    x264_picture_alloc(&pic_in, X264_CSP_I420, width, height);

    pic_in.img.i_csp = X264_CSP_I420|X264_CSP_VFLIP;
    pic_in.img.i_plane = 3;
    pic_in.i_type = X264_TYPE_AUTO;

    Sending to RTMP:

    sws_scale(convertCtx,&a,&scribe,0,height, pic_in.img.plane, pic_in.img.i_stride);
    int i_nal;
    int i_frame_size = x264_encoder_encode( encoder, &nal, &i_nal, &pic_in, &pic_out );
    if(i_frame_size <= 0){
       printf("\t!!!FAILED encode frame \n");
    }else{
       for (int i = 0,last=0; i < i_nal;i++)  
       {
           fwrite(nal[i].p_payload, 1, i_frame_size-last, fpw1);
           if (nal[i].i_type == NAL_SPS) {
               sps_len = nal[i].i_payload-4;
               sps  = new unsigned char[sps_len];
               memcpy(sps,nal[i].p_payload+4,sps_len);
           } else if (nal[i].i_type == NAL_PPS) {
               pps_len = nal[i].i_payload-4;
               pps  = new unsigned char[pps_len];
               memcpy(pps,nal[i].p_payload+4,pps_len);
               send_video_sps_pps();
               delete[] sps;   // allocated with new[], so release with delete[]
               delete[] pps;
           } else {
               send_rtmp_video(nal[i].p_payload,i_frame_size-last);
               break;
           }
           last += nal[i].i_payload;
       }
    }

    Send PPS and SPS

    void send_video_sps_pps(){
    if(rtmp!= NULL){
       RTMPPacket * packet;
       unsigned char * body;
       int i;

       packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE+1024);
       memset(packet,0,RTMP_HEAD_SIZE);

       packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
       body = (unsigned char *)packet->m_body;
       i = 0;
       body[i++] = 0x17; /* key frame + AVC codec id */
       body[i++] = 0x00; /* AVCPacketType = 0: AVC sequence header */

       body[i++] = 0x00; /* composition time = 0 */
       body[i++] = 0x00;
       body[i++] = 0x00;

       /*AVCDecoderConfigurationRecord*/
       body[i++] = 0x01;   /* configurationVersion */
       body[i++] = sps[1]; /* AVCProfileIndication */
       body[i++] = sps[2]; /* profile_compatibility */
       body[i++] = sps[3]; /* AVCLevelIndication */
       body[i++] = 0xff;   /* 6 reserved bits + lengthSizeMinusOne = 3 */

       /*sps*/
       body[i++]   = 0xe1;
       body[i++] = (sps_len >> 8) & 0xff;
       body[i++] = sps_len & 0xff;
       memcpy(&body[i],sps,sps_len);
       i +=  sps_len;

       /*pps*/
       body[i++]   = 0x01;
       body[i++] = (pps_len >> 8) & 0xff;
       body[i++] = (pps_len) & 0xff;
       memcpy(&body[i],pps,pps_len);
       i +=  pps_len;

       packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
       packet->m_nBodySize = i;
       packet->m_nChannel = 0x04;
       packet->m_nTimeStamp = 0;
       packet->m_hasAbsTimestamp = 0;
       packet->m_headerType = RTMP_PACKET_SIZE_MEDIUM;
       packet->m_nInfoField2 = rtmp->m_stream_id;

       RTMP_SendPacket(rtmp,packet,TRUE);
       free(packet);  
       rtmp_start_time = GetTickCount();
    }else{
       std::cout << "RTMP is not ready" << std::endl;
    }
    }

    Send video frame

    void send_rtmp_video(unsigned char * buf,int len){
    RTMPPacket * packet;
    long timeoffset = GetTickCount() - rtmp_start_time;

    int type = buf[0]&0x1f;

    packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE+len+9);
    memset(packet,0,RTMP_HEAD_SIZE);

    packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
    packet->m_nBodySize = len + 9;

    /*send video packet*/
    unsigned char *body = (unsigned char *)packet->m_body;
    memset(body,0,len+9);

    /* FLV video tag header: 0x27 = inter frame + AVC, 0x17 = key frame + AVC */
    body[0] = 0x27;
    if (type == NAL_SLICE_IDR) {
       body[0] = 0x17;
    }

    body[1] = 0x01;   /* AVCPacketType = 1: AVC NALU */
    body[2] = 0x00;   /* composition time = 0 */
    body[3] = 0x00;
    body[4] = 0x00;

    /* 4-byte big-endian length of the payload that follows */
    body[5] = (len >> 24) & 0xff;
    body[6] = (len >> 16) & 0xff;
    body[7] = (len >>  8) & 0xff;
    body[8] = (len ) & 0xff;

    /*copy data*/
    memcpy(&body[9],buf,len);

    packet->m_hasAbsTimestamp = 0;
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    if(rtmp != NULL){
       packet->m_nInfoField2 = rtmp->m_stream_id;
    }
    packet->m_nChannel = 0x04;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nTimeStamp = timeoffset;

    if(rtmp != NULL){
       RTMP_SendPacket(rtmp,packet,TRUE);
    }
    free(packet);

    }

  • FFmpeg: the penultimate image from the .txt file doesn't show in the video

    25 October 2019, by ArmKh

    I’m trying to create a video from images using ffmpeg. Actually, creating the video works, but there is a small problem. I have a text file with the names (paths) of the images which I’m using in the video. The file looks like this:

    file 'image1.jpg'
    file 'image2.jpg'
    file 'image3.jpg'
    file 'image4.jpg'
    file 'image5.jpg'

    And the ffmpeg command is the following:

    ffmpeg -y -r 1/5 -f concat -safe 0 -i imagenames.txt -i some_audio.mp3 -c:v libx264 -vf fps=30 -pix_fmt yuv420p -t 30 output.mp4

    The video should keep each image on screen for 5 seconds. But the problem is that the penultimate image (image4 in this case) is not shown in the video. Instead, it stays on image3 for 10 seconds and then moves to image5.

    So the video timing in seconds looks like this:

    [image1] -> [image2] -> [image3] -> [image4] -> [image5]
      5sec        5sec       10sec        0sec        5sec

    And the problem is not tied to image4 specifically. If image3 and image4 are swapped, the video skips image3 instead:

    [image1] -> [image2] -> [image4] -> [image3] -> [image5]
      5sec        5sec       10sec        0sec        5sec

    I hope you have some suggestions to fix this issue.
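
    One commonly suggested workaround (a sketch only, not verified against this exact setup) is to drop the -r 1/5 input rate and let the concat demuxer's duration directive control how long each image is shown; the last file is listed a second time so that its duration is honoured:

    file 'image1.jpg'
    duration 5
    file 'image2.jpg'
    duration 5
    file 'image3.jpg'
    duration 5
    file 'image4.jpg'
    duration 5
    file 'image5.jpg'
    duration 5
    file 'image5.jpg'

    ffmpeg -y -f concat -safe 0 -i imagenames.txt -i some_audio.mp3 -c:v libx264 -vf fps=30 -pix_fmt yuv420p -t 30 output.mp4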

  • ffprobe does not show the packet size of an MPEG transport stream as 188 bytes

    16 October 2015, by CompNet

    I created a transport stream from an H.264-encoded file using the following ffmpeg command:

    ffmpeg -i encoded.mp4 -c copy -map 0 -vbsf h264_mp4toannexb -f mpegts sample.ts

    Now I want to check the frames and packets within the transport stream. I used

    ffprobe -show_frames

    which shows the details of the audio and video frames. But I’m confused about the pkt_size field. Is it the actual size of each elementary-stream frame of audio and video (I/B/P frames)?

    Also, when I run

    ffprobe -show_packets

    is it supposed to give the details of each packet in the transport stream? Because the size field of each packet is not 188 bytes; rather, it is the same as the pkt_size I got with -show_frames.

    Could someone please explain why the size shown by -show_packets for a transport stream is not 188 bytes? Did I do anything wrong while multiplexing the MP4 into TS?
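
    For what it's worth, ffprobe's -show_packets reports packets as they come out of the demuxer, i.e. reassembled elementary-stream payloads, which is why their sizes track the frame sizes rather than the 188-byte container packets; the raw 188-byte TS packets still exist at the file level. Below is a minimal sketch (standard C++ only, a hypothetical helper, not part of ffprobe) that checks the raw file for the 0x47 sync byte at every 188-byte boundary:

    // ts_check.cpp - verify that a .ts file consists of 188-byte packets
    // by checking the 0x47 sync byte at the start of each packet.
    #include <cstdio>

    int main(int argc, char **argv) {
        if (argc < 2) { std::fprintf(stderr, "usage: %s file.ts\n", argv[0]); return 1; }
        std::FILE *f = std::fopen(argv[1], "rb");
        if (!f) { std::perror("fopen"); return 1; }

        unsigned char pkt[188];
        long count = 0;
        while (std::fread(pkt, 1, sizeof(pkt), f) == sizeof(pkt)) {
            if (pkt[0] != 0x47) {  /* every MPEG-TS packet starts with sync byte 0x47 */
                std::fprintf(stderr, "lost sync at packet %ld\n", count);
                std::fclose(f);
                return 1;
            }
            ++count;
        }
        std::printf("%ld packets of 188 bytes\n", count);
        std::fclose(f);
        return 0;
    }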