
Other articles (67)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp, etc.); audio (MP3, Ogg, Wav, etc.); video (Avi, MP4, Ogv, mpg, mov, wmv, etc.); or textual content, code, and other documents (OpenOffice, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the skeleton; a page for the configuration of the site's home page; and a page for the configuration of sections.
    It also provides an additional page, which appears only when certain plugins are activated, for controlling their display and specific features (...)

On other sites (11926)

  • Does PTS have to start at 0?

    5 July 2018, by stevendesu

    I’ve seen a number of questions regarding video PTS values not starting at zero, or asking how to make them start at zero. I’m aware that using ffmpeg I can do something like ffmpeg -i <video> -vf="setpts=PTS-STARTPTS" <output> to fix this kind of thing.

    However, it’s my understanding that PTS values don’t have to start at zero. For instance, if you join a live stream, odds are it has been going on for an hour and the PTS is already somewhere around 3600000+, yet your video player faithfully displays everything just fine. I would therefore expect no problem if I intentionally created a video with a PTS value starting at, say, the current wall-clock time.
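To make that intuition concrete, here is a small Python sketch (an editorial illustration, not part of the original question) showing that a player that schedules frames relative to the first PTS it sees is indifferent to the absolute starting value:

```python
# Illustration: players typically render frames relative to the first PTS
# they receive, so a non-zero starting PTS is harmless by itself.
TIMEBASE = 1 / 90000  # a common MPEG-TS timebase: 90 kHz ticks

def wall_clock_offsets(pts_list):
    """Seconds at which each frame is displayed, relative to the first frame."""
    first = pts_list[0]
    return [(pts - first) * TIMEBASE for pts in pts_list]

# A stream joined mid-broadcast: PTS starts around one hour in (3600 s * 90000).
late_join = [324000000, 324003000, 324006000]
# The same three frames with PTS starting at zero.
from_zero = [0, 3000, 6000]

print(wall_clock_offsets(late_join) == wall_clock_offsets(from_zero))  # True
```

The display schedule is identical either way; only the absolute numbers differ.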

    I want to send a live stream using ffmpeg, but embed the current time into the stream. This can be used both for latency calculation while the stream is live and, later, to determine when the stream was originally aired. From my understanding of PTS, something as simple as this should probably work:

    ffmpeg -i video.flv -vf="setpts=RTCTIME" rtmp://<output>

    When I try this, however, ffmpeg outputs the following:

    frame=   93 fps= 20 q=-1.0 Lsize=    9434kB time=535020:39:58.70 bitrate=   0.0kbits/s speed=1.35e+11x

    Note the extremely large value for "time", the bitrate (0.0 kbits/s), and the speed (135000000000x!)

    At first I thought the issue might be my timebase, so I tried the following:

    ffmpeg -i video.flv -vf="settb=1/1K,setpts=RTCTIME/1K" rtmp://<output>

    This puts everything in terms of milliseconds (1 PTS = 1 ms), but I had the same issue (massive time, zero bitrate, and massive speed).
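As a unit check (my own sketch, not from the question): ffmpeg's RTCTIME expression yields wall-clock time in microseconds, so with settb=1/1K the PTS must be expressed in milliseconds, hence the division by 1K:

```python
from fractions import Fraction

TIMEBASE = Fraction(1, 1000)  # settb=1/1K: one tick = 1 ms

def rtctime_to_pts(rtctime_us):
    """Mimic setpts=RTCTIME/1K: microseconds -> ticks of the 1/1000 timebase."""
    return rtctime_us // 1000

def pts_to_seconds(pts):
    """Convert ticks back to seconds using the stream timebase."""
    return float(pts * TIMEBASE)

rtc = 1_530_000_000_000_000  # an epoch timestamp in microseconds (July 2018)
pts = rtctime_to_pts(rtc)
print(pts)                  # 1530000000000 (milliseconds since the epoch)
print(pts_to_seconds(pts))  # 1530000000.0  (seconds since the epoch)
```

Such a PTS is enormous compared to a zero-based stream, which is consistent with the huge "time" value in the log above.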

    Am I misunderstanding something about PTS? Is it not allowed to start at non-zero values? Or am I just doing something wrong?

    Update

    After reviewing @Gyan’s answer, I formatted my command like so:

    ffmpeg -re -i video.flv -vf="settb=1/1K, setpts=(RTCTIME-RTCSTART)/1K" -output_ts_offset $(date +%s.%N) rtmp://<output>

    This way the PTS values would match up to "milliseconds since the stream started" and would be offset by the start time of the stream (theoretically making PTS = timestamp on the server).
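The latency calculation this enables can be sketched as follows (an editorial example assuming the sender used the command above, so each frame's absolute time is offset + pts_ms/1000 in epoch seconds; the function name is hypothetical):

```python
import time

def frame_latency(stream_offset_s, pts_ms, now_s=None):
    """Receiver-side latency estimate: how long ago was this frame stamped?
    stream_offset_s is the -output_ts_offset value (epoch seconds at start);
    pts_ms is the frame's PTS in the 1/1000 timebase."""
    sent_at = stream_offset_s + pts_ms / 1000.0
    now_s = time.time() if now_s is None else now_s
    return now_s - sent_at

# A frame stamped 2500 ms into a stream that started at epoch 1_530_000_000,
# observed at epoch 1_530_000_003.1 -> roughly 600 ms of latency.
print(round(frame_latency(1_530_000_000, 2500, now_s=1_530_000_003.1), 3))  # 0.6
```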

    This looked like it was encoding better:

    frame=  590 fps=7.2 q=22.0 size=   25330kB time=00:01:21.71 bitrate=2539.5kbits/s dup=0 drop=1350 speed=   1x

    The bitrate was now correct, the time was accurate, and the speed was not outrageous. The frames per second was still a bit off, though (the source video is 24 fps but it’s reporting 7.2 frames per second).

    When I tried watching the stream from the other end, the video was out of sync with the audio and played at about double normal speed for a while; then the video froze and the audio continued without it.

    Furthermore, when I dumped the stream to a file (ffmpeg -i rtmp://<output> dump.mp4) and looked at the PTS timestamps with ffprobe (ffprobe -show_entries packet=codec_type,pts dump.mp4 | grep "video" -B 1 -A 2), the timestamps didn’t seem to show server time at all:

    ...
    --
    [PACKET]
    codec_type=video
    pts=131072
    [/PACKET]
    [PACKET]
    codec_type=video
    pts=130048
    [/PACKET]
    --
    [PACKET]
    codec_type=video
    pts=129536
    [/PACKET]
    [PACKET]
    codec_type=video
    pts=130560
    [/PACKET]
    --
    [PACKET]
    codec_type=video
    pts=131584
    [/PACKET]

    Is the problem just an incompatibility with RTMP?
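Worth noting: packets are stored in decode order, so non-monotonic pts values like those in the dump above are normal when B-frames are present; what matters for the question is the absolute magnitude. A small Python sketch (my own, assuming ffprobe's default key=value packet output) that summarizes such a dump:

```python
# Summarize pts values pulled from "ffprobe -show_entries packet=codec_type,pts"
# output. Packets appear in decode order, so out-of-order pts alone is not a
# problem when B-frames are present.
probe_lines = """\
codec_type=video
pts=131072
codec_type=video
pts=130048
codec_type=video
pts=129536
""".splitlines()

pts_values = [int(line.split("=")[1]) for line in probe_lines
              if line.startswith("pts=")]

print(min(pts_values))  # 129536 -- the earliest pts; clearly not an epoch time
print(sum(a > b for a, b in zip(pts_values, pts_values[1:])))  # 2 decode-order inversions
```

The small minimum confirms the dump's timestamps were rebased rather than preserving the server time.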

    Update 2

    I’ve removed the video filter and I’m now encoding like so:

    ffmpeg -re -i video.flv -output_ts_offset $(date +%s.%N) rtmp://<output>

    This is encoding correctly:

    frame=  910 fps= 23 q=25.0 size=   12027kB time=00:00:38.97 bitrate=2528.2kbits/s speed=0.981x

    In order to verify that the PTS values are correct, I’m dumping the output to a file like so:

    ffmpeg -i rtmp://<output> -copyts -write_tmcd 0 dump.mp4

    I tried saving it as dump.flv (since it’s RTMP), but this threw the error:

    [flv @ 0x5600f24b4620] Audio codec mp3 not compatible with flv

    This is a bit weird, since the audio isn’t mp3-encoded (it’s Speex), but whatever.

    While dumping this file, the following error pops up repeatedly:

    frame=    1 fps=0.0 q=0.0 size=       0kB time=00:00:09.21 bitrate=   0.0kbits/s dup=0 dr
    43090023 frame duplication too large, skipping
    43090027 frame duplication too large, skipping
       Last message repeated 3 times
    43090031 frame duplication too large, skipping
       Last message repeated 3 times
    43090035 frame duplication too large, skipping

    Playing the resulting video in VLC plays an audio stream but displays no video. I then attempted to probe this video with ffprobe to look at the video PTS values:

    ffprobe -show_entries packet=codec_type,pts dump.mp4 | grep "video" -B 1 -A 2

    This returns only a single video frame, whose PTS is not large like I would expect:

    [PACKET]
    codec_type=video
    pts=1020
    [/PACKET]

    This has been a surprisingly difficult task.

  • HLS: How to detect out-of-order segments in a media playlist?

    27 June 2018, by anirudh612

    What would be an efficient way to detect whether an HTTP Live Streaming VOD playlist has segments out of order (and to count how many segments are out of order)? The segments are ordered correctly based on the #EXT-X-PROGRAM-DATE-TIME tag, but in some cases the segments’ decoding timestamps are out of order. Currently, the workflow I’m following is:

    1. Convert the HLS stream into an mp4 using ffmpeg:

      ffmpeg -i http://localhost:8080/test/unsorted.m3u8 -c copy -bsf:a aac_adtstoasc test/unsorted.mp4 &> test/unsorted_ffmpeg.log

    2. Inspect the logs and count the number of occurrences of the "Non-monotonous DTS in output stream" log message:

      [mp4 @ 0x7fe74f01b000] Non-monotonous DTS in output stream 0:1; previous: 12063760, current: 11866128; changing to 12063761. This may result in incorrect timestamps in the output file.

      However, this requires downloading and reading all of the .ts segments, which is an expensive operation. Is there a more efficient way to detect out-of-order DTS or PTS in the chunks using ffmpeg or ffprobe?
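One cheaper approach (an editorial sketch, not a tested pipeline from the question) is to probe only the first video packet of each segment rather than remuxing everything, then count inversions in the resulting DTS sequence:

```python
import subprocess

def first_video_dts(url):
    """Read only the first video packet's DTS from a segment via ffprobe.
    (Assumes ffprobe is on PATH; -read_intervals %+#1 stops after one packet.)"""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-read_intervals", "%+#1",
         "-show_entries", "packet=dts", "-of", "csv=p=0", url],
        capture_output=True, text=True, check=True).stdout.strip()
    return int(out.splitlines()[0])

def count_out_of_order(dts_values):
    """Number of segments whose starting DTS is lower than the previous one's."""
    return sum(b < a for a, b in zip(dts_values, dts_values[1:]))

# Given the starting DTS of consecutive segments, each inversion marks an
# out-of-order segment (here the fourth segment jumps backwards):
print(count_out_of_order([0, 180000, 360000, 270000, 540000]))  # 1
```

This still touches every segment, but reads only its head instead of demuxing the whole stream; whether that is cheap enough depends on how the segments are served.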

  • RTMP Broadcast packet body structure for Twitch

    22 May 2018, by Dobby

    I’m currently working on a project similar to OBS, where I’m capturing screen data, encoding it with the x264 library, and then broadcasting it to a Twitch server.

    Currently, the servers are accepting the data, but no video is being played: it buffers for a moment, then returns the error code "2000: network error".

    Like OBS Classic, I’m dividing each NAL provided by x264 by its type, and then making changes to each:

    int frame_size = x264_encoder_encode(encoder, &nals, &num_nals, &pic_in, &pic_out);

       //sort the NAL's into their types and make necessary adjustments

       int timeOffset = int(pic_out.i_pts - pic_out.i_dts);

       timeOffset = htonl(timeOffset);//host to network translation, ensure the bytes are in the right format
       BYTE *timeOffsetAddr = ((BYTE*)&timeOffset) + 1;

       videoSection sect;
       bool foundFrame = false;

       uint8_t * spsPayload = NULL;
       int spsSize = 0;

       for (int i = 0; i < num_nals; i++) {
           //std::cout << "VideoEncoder: EncodedImages Size: " << encodedImages->size() << std::endl;
           x264_nal_t &nal = nals[i];
           //std::cout << "NAL is:" << nal.i_type << std::endl;

           //need to account for pps/sps, seems to always be the first frame sent
           if (nal.i_type == NAL_SPS) {
               spsSize = nal.i_payload;
               spsPayload = (uint8_t*)malloc(spsSize);
               memcpy(spsPayload, nal.p_payload, spsSize);
           } else if (nal.i_type == NAL_PPS){
               //pps always happens after sps
               if (spsPayload == NULL) {
                   std::cout << "VideoEncoder: critical error, sps not set" << std::endl;
               }
               uint8_t * payload = (uint8_t*)malloc(nal.i_payload + spsSize);
               memcpy(payload, spsPayload, spsSize);
               memcpy(payload, nal.p_payload + spsSize, nal.i_payload);
               sect = { nal.i_payload + spsSize, payload, nal.i_type };
               encodedImages->push(sect);
           } else if (nal.i_type == NAL_SEI || nal.i_type == NAL_FILLER) {
               //these need some bytes at the start removed
               BYTE *skip = nal.p_payload;
               while (*(skip++) != 0x1);
               int skipBytes = (int)(skip - nal.p_payload);

               int newPayloadSize = (nal.i_payload - skipBytes);

               uint8_t * payload = (uint8_t*)malloc(newPayloadSize);
               memcpy(payload, nal.p_payload + skipBytes, newPayloadSize);
               sect = { newPayloadSize, payload, nal.i_type };
               encodedImages->push(sect);

           } else if (nal.i_type == NAL_SLICE_IDR || nal.i_type == NAL_SLICE) {
               //these packets need an additional section at the start
               BYTE *skip = nal.p_payload;
               while (*(skip++) != 0x1);
               int skipBytes = (int)(skip - nal.p_payload);

               std::vector<BYTE> bodyData;
               if (!foundFrame) {
                   if (nal.i_type == NAL_SLICE_IDR) { bodyData.push_back(0x17); } else { bodyData.push_back(0x27); } //add a 17 or a 27 as appropriate
                   bodyData.push_back(1);
                   bodyData.push_back(*timeOffsetAddr);

                   foundFrame = true;
               }

               //put into the payload the bodyData followed by the nal payload
               uint8_t * bodyDataPayload = (uint8_t*)malloc(bodyData.size());
               memcpy(bodyDataPayload, bodyData.data(), bodyData.size() * sizeof(BYTE));

               int newPayloadSize = (nal.i_payload - skipBytes);

               uint8_t * payload = (uint8_t*)malloc(newPayloadSize + sizeof(bodyDataPayload));
               memcpy(payload, bodyDataPayload, sizeof(bodyDataPayload));
               memcpy(payload + sizeof(bodyDataPayload), nal.p_payload + skipBytes, newPayloadSize);
               int totalSize = newPayloadSize + sizeof(bodyDataPayload);
               sect = { totalSize, payload, nal.i_type };
               encodedImages->push(sect);
           } else {
               std::cout << "VideoEncoder: Nal type did not match expected" << std::endl;
               continue;
           }
       }

    The NAL payload data is then put into a struct, videoSection, in a queue buffer:

    //used to transfer encoded data
    struct videoSection {
       int frameSize;
       uint8_t* payload;
       int type;
    };

    After that, it is picked up by the broadcaster, a few more changes are made, and then I call rtmp_send():

    videoSection sect = encodedImages->front();
    encodedImages->pop();

    //std::cout << "Broadcaster: Frame Size: " << sect.frameSize << std::endl;

    //two methods of sending RTMP data, _sendpacket and _write. Using sendpacket for greater control

    RTMPPacket * packet;

    unsigned char* buf = (unsigned char*)sect.payload;

    int type = buf[0]&0x1f; //I believe &0x1f sets a 32bit limit
    int len = sect.frameSize;
    long timeOffset = GetTickCount() - rtmp_start_time;

    //assign space packet will need
    packet = (RTMPPacket *)malloc(sizeof(RTMPPacket)+RTMP_MAX_HEADER_SIZE + len + 9);
    memset(packet, 0, sizeof(RTMPPacket) + RTMP_MAX_HEADER_SIZE);

    packet->m_body = (char *)packet + sizeof(RTMPPacket) + RTMP_MAX_HEADER_SIZE;
    packet->m_nBodySize = len + 9;

    //std::cout << "Broadcaster: Packet Size: " << sizeof(RTMPPacket) + RTMP_MAX_HEADER_SIZE + len + 9 << std::endl;
    //std::cout << "Broadcaster: Packet Body Size: " << len + 9 << std::endl;

    //set body to point to the packetbody
    unsigned char *body = (unsigned char *)packet->m_body;
    memset(body, 0, len + 9);



    //NAL_SLICE_IDR represents keyframe
    //first element determines packet type
    body[0] = 0x27;//inter-frame h.264
    if (sect.type == NAL_SLICE_IDR) {
       body[0] = 0x17; //h.264 codec id
    }


    //-------------------------------------------------------------------------------
    //this section taken from https://stackoverflow.com/questions/25031759/using-x264-and-librtmp-to-send-live-camera-frame-but-the-flash-cant-show
    //in an effort to understand packet format. it does not resolve my previous issues formatting the data for twitch to play it

    //sets body to be NAL unit
    body[1] = 0x01;
    body[2] = 0x00;
    body[3] = 0x00;
    body[4] = 0x00;

    //>> is a shift right
    //shift len to the right, and AND it
    /*body[5] = (len >> 24) & 0xff;
    body[6] = (len >> 16) & 0xff;
    body[7] = (len >> 8) & 0xff;
    body[8] = (len) & 0xff;*/

    //end code sourced from https://stackoverflow.com/questions/25031759/using-x264-and-librtmp-to-send-live-camera-frame-but-the-flash-cant-show
    //-------------------------------------------------------------------------------

    //copy from buffer into rest of body
    memcpy(&amp;body[9], buf, len);

    //DEBUG

    //save individual packet body to a file with name rtmp[packetnum]
    //determine why some packets do not have 0x27 or 0x17 at the start
    //still happening, makes no sense given the above code

    /*std::string fileLocation = "rtmp" + std::to_string(packCount++);
    std::cout << fileLocation << std::endl;
    const char * charConversion = fileLocation.c_str();

    FILE* saveFile = NULL;
    saveFile = fopen(charConversion, "w+b");//open as write and binary
    if (!fwrite(body, len + 9, 1, saveFile)) {
       std::cout << "VideoEncoder: Error while trying to write to file" << std::endl;
    }
    fclose(saveFile);*/

    //END DEBUG

    //other packet details
    packet->m_hasAbsTimestamp = 0;
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    if (rtmp != NULL) {
       packet->m_nInfoField2 = rtmp->m_stream_id;
    }
    packet->m_nChannel = 0x04;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nTimeStamp = timeOffset;

    //send the packet
    if (rtmp != NULL) {
       RTMP_SendPacket(rtmp, packet, TRUE);
    }

    I can see in the inspector that Twitch is receiving the data at a steady 3 kbps, so I’m sure something is wrong with how I’m adjusting the data before sending it. Can anyone advise me on what I’m doing wrong here?
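    For reference, here is a hedged Python sketch (an editorial reconstruction based on the FLV specification, not on the code above) of how the body of an AVC video message is commonly laid out, including the 4-byte NALU length prefix that the commented-out body[5..8] lines were attempting to write:

    ```python
    import struct

    def flv_avc_video_body(nalu, keyframe, composition_time_ms=0):
        """Build an FLV VideoData body for one H.264 NALU (per the FLV spec):
        [FrameType/CodecID][AVCPacketType][CompositionTime:3][NALU length:4][NALU]."""
        header = bytes([0x17 if keyframe else 0x27])       # frame type 1/2, codec 7 = AVC
        header += b"\x01"                                   # AVCPacketType 1 = NALU(s)
        header += struct.pack(">i", composition_time_ms)[1:]  # 24-bit cts (pts - dts)
        header += struct.pack(">I", len(nalu))              # length prefix, not a start code
        return header + nalu

    body = flv_avc_video_body(b"\x65\x88\x84", keyframe=True)
    print(body[:9].hex())  # 170100000000000003
    ```

    In this layout the NALUs carry a big-endian length prefix (AVCC style) rather than Annex-B start codes, and only the composition time, not the full timestamp, lives in the body; the absolute timestamp goes in the RTMP packet header.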