Advanced search

Media (0)

Keyword: - Tags - / optimisation

No media matching your criteria is available on this site.

Other articles (97)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it is enabled, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is operational straight away. A configuration step is therefore not required.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those used by the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared instance at user sign-up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (5854)

  • Converting large (4gb) avi file to mpeg or mp4 format using C#

    16 September 2014, by user2330678

    I have successfully converted AVI files to MPEG using the NReco converter http://www.nrecosite.com/video_converter_net.aspx

    But the length (duration) of the converted video is never greater than 2 min 35 s.
    I tried the ffmpeg command-line utility (https://www.ffmpeg.org/download.html or http://ffmpeg.zeranoe.com/builds/, ffmpeg 64-bit static for Windows), but the length was always less than or equal to 2 min 35 s.
    How can I increase the duration of the ffmpeg-converted video?

    I tried the -t option but couldn’t increase the length (duration) of the converted video. The original video is a 14 min 5 s AVI file.

    ffmpeg -i inputAVIfilename outputMPEGfilename
    ffmpeg -i inputAVIfilename -t 90000 outputMPEGfilename

    The video file contains only bitmap images; no sound track is required.

    Please note that my DLL will be used with both Windows and web applications.
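    For reference, ffmpeg’s -t option takes a duration in seconds (or hh:mm:ss), so -t 90000 requests 25 hours of output and would never shorten anything; for a 14 min 5 s source you could pass -t 845 or simply omit -t. A minimal sketch of composing such command lines (the filenames and the ffmpeg_args helper are placeholders for illustration, not part of the original question):

    ```python
    # Sketch: compose an ffmpeg command line as an argument list.
    # "-t" placed after "-i" acts as an output option that limits how much
    # is written; its value is a duration in seconds (or "hh:mm:ss").
    def ffmpeg_args(src, dst, duration_s=None):
        args = ["ffmpeg", "-i", src]
        if duration_s is not None:
            args += ["-t", str(duration_s)]
        return args + [dst]

    full = ffmpeg_args("input.avi", "output.mpg")       # no -t: keep full length
    clip = ffmpeg_args("input.avi", "output.mpg", 845)  # first 14 min 5 s only
    ```

    Since -t only truncates, a result capped at 2 min 35 s suggests the limit comes from somewhere else in the pipeline rather than from -t itself.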

  • Rails 5 - Concurrent large video uploads using CarrierWave eat up the server memory/space

    22 March 2020, by Milind

    I have a working Rails 5 app that uses React for the frontend and React Dropzone Uploader to upload video files via CarrierWave.

    So far, what is working is listed below:

    1. Users can upload videos, and each video is encoded according to the selection made by the user - HLS or MPEG-DASH - for online streaming.
    2. Once the video is uploaded to the server, streaming starts by:
      • FIRST, copying the video into the /tmp folder.
      • Running a bash script that uses ffmpeg to transcode the uploaded video with predefined commands, producing new video fragments inside the /tmp folder.
      • Once the background job is done, all the videos are uploaded to AWS S3, which is how CarrierWave works by default.
    3. So, when multiple videos are uploaded, they are all copied into the /tmp folder, then transcoded, and eventually uploaded to S3.

    My questions, where I am looking for some help, are listed below:

    1- The above process is fine for small videos, but what if many concurrent users each upload 2 GB of video? I know this will kill my server, as my /tmp folder will keep growing and consume all the memory, making the server die. How can I allow concurrent video uploads without affecting my server’s memory consumption?

    2- Is there a way to upload the videos directly to AWS S3 first, and then use another proxy server/child application to encode them: download them from S3 to the child server, convert them, and upload them again to the destination? This is almost the same process, but done in the cloud, where memory consumption can be on-demand - though it will not be cost-effective.

    3- Is there an easy and cost-effective way to upload large videos, transcode them, and upload them to AWS S3 without affecting my server’s memory? Am I missing some technical architecture here?

    4- How do YouTube/Netflix work? I know they do the same thing in a smart way, but can someone help me improve this?

    Thanks in advance.
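    On point 2, one widely used pattern is to have the browser upload directly to S3 with a pre-signed URL, so the video never passes through the Rails server at all; a background worker then pulls the object from S3 for transcoding. Below is a minimal sketch of SigV4 query pre-signing using only the Python standard library, purely to illustrate the mechanism (bucket, key and credentials are placeholders; in a real Rails app the aws-sdk-s3 gem’s presigner would normally generate these URLs):

    ```python
    # Sketch: build a SigV4 pre-signed PUT URL for direct-to-S3 upload.
    # All names (bucket, key, credentials) are illustrative placeholders.
    import hashlib
    import hmac
    from datetime import datetime, timezone
    from urllib.parse import quote

    def presign_put(bucket, key, access_key, secret_key,
                    region="us-east-1", expires=3600):
        host = f"{bucket}.s3.{region}.amazonaws.com"
        now = datetime.now(timezone.utc)
        amz_date = now.strftime("%Y%m%dT%H%M%SZ")
        datestamp = now.strftime("%Y%m%d")
        scope = f"{datestamp}/{region}/s3/aws4_request"
        params = {
            "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
            "X-Amz-Credential": f"{access_key}/{scope}",
            "X-Amz-Date": amz_date,
            "X-Amz-Expires": str(expires),
            "X-Amz-SignedHeaders": "host",
        }
        # Canonical query string: keys sorted, values fully URL-encoded.
        qs = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                      for k, v in sorted(params.items()))
        canonical = "\n".join(["PUT", "/" + quote(key), qs,
                               f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"])
        string_to_sign = "\n".join(["AWS4-HMAC-SHA256", amz_date, scope,
                                    hashlib.sha256(canonical.encode()).hexdigest()])
        # Derive the signing key: date -> region -> service -> terminator.
        sig_key = ("AWS4" + secret_key).encode()
        for part in (datestamp, region, "s3", "aws4_request"):
            sig_key = hmac.new(sig_key, part.encode(), hashlib.sha256).digest()
        sig = hmac.new(sig_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
        return f"https://{host}/{quote(key)}?{qs}&X-Amz-Signature={sig}"
    ```

    The client PUTs the file straight to the returned URL, so server memory and /tmp usage stay flat regardless of how many users upload concurrently; only the transcoding workers touch the video bytes.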

  • How to send large x264 NAL over RTMP ?

    17 September 2017, by samgak

    I’m trying to stream video over RTMP using x264 and rtmplib in C++ on Windows.

    So far I have managed to encode and stream a test video pattern consisting of animated multi-colored vertical lines that I generate in code. It’s possible to start and stop the stream, and start and stop the player, and it works every time. However, as soon as I modify it to send encoded camera frames instead of the test pattern, the streaming becomes very unreliable. It only starts <20% of the time, and stopping and restarting doesn’t work.

    After searching around for answers, I concluded that it must be because the NAL size is too large (my test pattern is mostly flat color, so it encodes to a very small size) and there is an Ethernet packet limit of around 1400 bytes that affects it. So I tried to make x264 output only NALs under 1200 bytes, by setting i_slice_max_size in my x264 setup:

    if (x264_param_default_preset(&param, "veryfast", "zerolatency") < 0)
       return false;
    param.i_csp = X264_CSP_I420;
    param.i_threads = 1;
    param.i_width = width;  //set frame width
    param.i_height = height;  //set frame height
    param.b_cabac = 0;
    param.i_bframe = 0;
    param.b_interlaced = 0;
    param.rc.i_rc_method = X264_RC_ABR;
    param.i_level_idc = 21;
    param.rc.i_bitrate = 128;
    param.b_intra_refresh = 1;
    param.b_annexb = 1;
    param.i_keyint_max = 25;
    param.i_fps_num = 15;
    param.i_fps_den = 1;

    param.i_slice_max_size = 1200;

    if (x264_param_apply_profile(&param, "baseline") < 0)
       return false;

    This reduces the NAL size, but it doesn’t seem to make any difference to the reliability issues.

    I’ve also tried fragmenting the NALs, using this Java code and RFC 3984 (RTP Payload Format for H.264 Video) as a reference, but it doesn’t work at all (code below): the server says "stream has stopped" immediately after it starts. I’ve tried including the NAL header (with the timestamp etc.) in each fragment and in just the first, but it doesn’t work for me either way.

    I’m pretty sure my issue has to be with the NAL size and not PPS/SPS or anything like that (as in this question) or with my network connection or test server, because everything works fine with the test pattern.

    I’m sending NAL_PPS and NAL_SPS (only once), and all NAL_SLICE_IDR and NAL_SLICE packets. I’m ignoring NAL_SEI and not sending it.

    One thing that is confusing me is that the source code that I can find on the internet that does similar things to what I want doesn’t match up with what the RFC specifies. For example, RFC 3984 section 5.3 defines the NAL octet, which should have the NAL type in the lower 5 bits and the NRI in bits 5 and 6 (bit 7 is zero). The types NAL_SLICE_IDR and NAL_SLICE have values of 5 and 1 respectively, which are the ones in table 7-1 of this document (PDF) referenced by the RFC and also the ones output by x264. But the code that actually works sets the NAL octet to 39 (0x27) and 23 (0x17), for reasons unknown to me. When implementing fragmented NALs, I’ve tried both following the spec and using the values copied over from the working code, but neither works.
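    As a point of comparison with the code below, the byte layout RFC 3984 prescribes for FU-A can be sketched in a few lines (Python purely for illustration; fua_fragments is a hypothetical helper, and note that FU-A is defined for RTP payloads, not RTMP message bodies):

    ```python
    # Sketch of RFC 3984 section 5.8 FU-A fragmentation of one H.264 NAL unit.
    # Input is the NAL without its start code; output is a list of FU-A payloads.
    def fua_fragments(nal, max_size):
        indicator = (nal[0] & 0x60) | 28   # FU indicator: keep NRI bits, type = 28 (FU-A)
        nal_type = nal[0] & 0x1F           # original NAL type goes into the FU header
        payload = nal[1:]                  # the original NAL header byte is not repeated
        out = []
        for i in range(0, len(payload), max_size):
            chunk = payload[i:i + max_size]
            fu_header = nal_type
            if i == 0:
                fu_header |= 0x80          # S (start) bit on the first fragment
            if i + max_size >= len(payload):
                fu_header |= 0x40          # E (end) bit on the last fragment
            out.append(bytes([indicator, fu_header]) + chunk)
        return out
    ```

    Per the RFC, the receiver reassembles the NAL by restoring the header byte from the indicator’s NRI bits plus the FU header’s type bits and concatenating the fragment payloads; the 0x17/0x27 values in the code below do not fit this layout, which suggests they come from a different framing convention than RTP.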

    Any help appreciated.

    void sendNAL(unsigned char* buf, int len)
    {
       Logging::LogNumber("sendNAL", len);
       RTMPPacket * packet;
       long timeoffset = GetTickCount() - startTime;

       if (buf[2] == 0x00) { //00 00 00 01
           buf += 4;
           len -= 4;
       }
       else if (buf[2] == 0x01) { //00 00 01
           buf += 3;
           len -= 3;
       }
       else
       {
           Logging::LogStdString("INVALID x264 FRAME!");
       }
       int type = buf[0] & 0x1f;
       int maxNALSize = 1200;

       if (len <= maxNALSize)
       {
           packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + len + 9);
           memset(packet, 0, RTMP_HEAD_SIZE);

           packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
           packet->m_nBodySize = len + 9;

           unsigned char *body = (unsigned char *)packet->m_body;
           memset(body, 0, len + 9);

           body[0] = 0x27;
           if (type == NAL_SLICE_IDR) {
               body[0] = 0x17;
           }

           body[1] = 0x01;   //nal unit
           body[2] = 0x00;
           body[3] = 0x00;
           body[4] = 0x00;

           body[5] = (len >> 24) & 0xff;
           body[6] = (len >> 16) & 0xff;
           body[7] = (len >> 8) & 0xff;
           body[8] = (len) & 0xff;

           memcpy(&body[9], buf, len);

           packet->m_hasAbsTimestamp = 0;
           packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
           if (rtmp != NULL) {
               packet->m_nInfoField2 = rtmp->m_stream_id;
           }
           packet->m_nChannel = 0x04;
           packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
           packet->m_nTimeStamp = timeoffset;

           if (rtmp != NULL) {
               RTMP_SendPacket(rtmp, packet, QUEUE_RTMP);
           }
           free(packet);
       }
       else
       {
           packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + maxNALSize + 90);
           memset(packet, 0, RTMP_HEAD_SIZE);      

           // split large NAL into multiple smaller ones:
           int sentBytes = 0;
           bool firstFragment = true;
           while (sentBytes < len)
           {
               // decide how many bytes to send in this fragment:
               int fragmentSize = maxNALSize;
               if (sentBytes + fragmentSize > len)
                   fragmentSize = len - sentBytes;
               bool lastFragment = (sentBytes + fragmentSize) >= len;

               packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
               int headerBytes = firstFragment ? 10 : 2;
               packet->m_nBodySize = fragmentSize + headerBytes;

               unsigned char *body = (unsigned char *)packet->m_body;
               memset(body, 0, fragmentSize + headerBytes);

               //key frame
               int NALtype = 0x27;
               if (type == NAL_SLICE_IDR) {
                   NALtype = 0x17;
               }

               // Set FU-A indicator
               body[0] = (unsigned char)((NALtype & 0x60) & 0xFF); // FU indicator: keep the NRI bits
               body[0] += 28; // 28 = FU-A (fragmentation unit A), see RFC 3984: https://tools.ietf.org/html/rfc3984

               // Set FU-A header
               body[1] = (unsigned char)(NALtype & 0x1F);  // FU header: original NAL type
               body[1] += (firstFragment ? 0x80 : 0) + (lastFragment ? 0x40 : 0); // Start/End bits

               body[2] = 0x01;   //nal unit
               body[3] = 0x00;
               body[4] = 0x00;
               body[5] = 0x00;

               body[6] = (len >> 24) & 0xff;
               body[7] = (len >> 16) & 0xff;
               body[8] = (len >> 8) & 0xff;
               body[9] = (len) & 0xff;

               //copy data
               memcpy(&body[headerBytes], buf + sentBytes, fragmentSize);

               packet->m_hasAbsTimestamp = 0;
               packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
               if (rtmp != NULL) {
                   packet->m_nInfoField2 = rtmp->m_stream_id;
               }
               packet->m_nChannel = 0x04;
               packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
               packet->m_nTimeStamp = timeoffset;

               if (rtmp != NULL) {
                   RTMP_SendPacket(rtmp, packet, TRUE);
               }

               sentBytes += fragmentSize;
               firstFragment = false;
           }

           free(packet);
       }
    }