Other articles (54)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Making files available

    14 April 2011, by

    By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are the originals or the result of their transformation or encoding. It only allows them to be viewed.
    However, it is possible and easy to give visitors access to these documents in various forms.
    All of this happens on the skeleton configuration page. You need to go to the channel's administration area and choose in the navigation (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make other modifications (...)

On other sites (6563)

  • HLS: How to detect out-of-order segments in a media playlist?

    27 June 2018, by anirudh612

    What would be an efficient way to detect whether an HTTP Live Streaming VOD playlist has segments out of order (and to count how many segments are out of order)? The segments are ordered correctly based on the #EXT-X-PROGRAM-DATE-TIME tag, but in some cases the segment decoding timestamps are out of order. Currently, the workflow I'm following is:

    1. Convert the HLS stream into an mp4 using ffmpeg:

      ffmpeg -i http://localhost:8080/test/unsorted.m3u8 -c copy -bsf:a aac_adtstoasc test/unsorted.mp4 &> test/unsorted_ffmpeg.log

    2. Inspect the logs and count the number of occurrences of the "Non-monotonous DTS in output stream" log message:

      [mp4 @ 0x7fe74f01b000] Non-monotonous DTS in output stream 0:1; previous: 12063760, current: 11866128; changing to 12063761. This may result in incorrect timestamps in the output file.

      However, this requires downloading and reading all of the TS segments, which is an expensive operation. Is there a more efficient way to determine out-of-order DTS or PTS in chunks using ffmpeg or ffprobe?
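
    One lower-cost direction (a sketch only, not a verified answer, with hypothetical helper names first_video_dts and count_out_of_order, and assuming the playlist's segment URIs are fetchable and ffprobe is on the PATH) is to ask ffprobe for just the first video packet of each segment via -read_intervals and compare the resulting dts_time values across segment boundaries; every segment is still touched, but nothing is remuxed:

     import subprocess
     from urllib.parse import urljoin
     from urllib.request import urlopen

     def first_video_dts(url):
         # Read only the first video packet of the segment and report its dts_time.
         out = subprocess.run(
             ["ffprobe", "-v", "error", "-select_streams", "v:0",
              "-read_intervals", "%+#1",
              "-show_entries", "packet=dts_time", "-of", "csv=p=0", url],
             capture_output=True, text=True, check=True).stdout.strip().splitlines()
         return float(out[0]) if out and out[0] not in ("", "N/A") else None

     def count_out_of_order(playlist_url):
         # Resolve the media segment URIs listed in the VOD playlist.
         lines = urlopen(playlist_url).read().decode().splitlines()
         segments = [urljoin(playlist_url, l.strip())
                     for l in lines if l.strip() and not l.startswith("#")]
         prev, out_of_order = None, 0
         for seg in segments:
             dts = first_video_dts(seg)
             if dts is None:
                 continue
             # Only compares segment boundaries, i.e. segments whose first DTS
             # is lower than the previous segment's first DTS.
             if prev is not None and dts < prev:
                 out_of_order += 1
             prev = dts
         return out_of_order

     print(count_out_of_order("http://localhost:8080/test/unsorted.m3u8"))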

  • RTMP Broadcast packet body structure for Twitch

    22 May 2018, by Dobby

    I’m currently working on a project similar to OBS, where I’m capturing screen data, encoding it with the x264 library, and then broadcasting it to a Twitch server.

    Currently, the servers are accepting the data, but no video is being played - it buffers for a moment, then returns an error code "2000: network error".

    Like OBS Classic, I’m dividing each NAL provided by x264 by its type and then making changes to each:

    int frame_size = x264_encoder_encode(encoder, &nals, &num_nals, &pic_in, &pic_out);

       //sort the NAL's into their types and make necessary adjustments

       int timeOffset = int(pic_out.i_pts - pic_out.i_dts);

       timeOffset = htonl(timeOffset);//host to network translation, ensure the bytes are in the right format
       BYTE *timeOffsetAddr = ((BYTE*)&timeOffset) + 1;

       videoSection sect;
       bool foundFrame = false;

       uint8_t * spsPayload = NULL;
       int spsSize = 0;

       for (int i = 0; i < num_nals; i++) {
           //std::cout << "VideoEncoder: EncodedImages Size: " << encodedImages->size() << std::endl;
           x264_nal_t &nal = nals[i];
           //std::cout << "NAL is:" << nal.i_type << std::endl;

           //need to account for pps/sps, seems to always be the first frame sent
           if (nal.i_type == NAL_SPS) {
               spsSize = nal.i_payload;
               spsPayload = (uint8_t*)malloc(spsSize);
               memcpy(spsPayload, nal.p_payload, spsSize);
           } else if (nal.i_type == NAL_PPS){
               //pps always happens after sps
               if (spsPayload == NULL) {
                   std::cout << "VideoEncoder: critical error, sps not set" << std::endl;
               }
               uint8_t * payload = (uint8_t*)malloc(nal.i_payload + spsSize);
               memcpy(payload, spsPayload, spsSize);
               memcpy(payload, nal.p_payload + spsSize, nal.i_payload);
               sect = { nal.i_payload + spsSize, payload, nal.i_type };
               encodedImages->push(sect);
           } else if (nal.i_type == NAL_SEI || nal.i_type == NAL_FILLER) {
               //these need some bytes at the start removed
               BYTE *skip = nal.p_payload;
               while (*(skip++) != 0x1);
               int skipBytes = (int)(skip - nal.p_payload);

               int newPayloadSize = (nal.i_payload - skipBytes);

               uint8_t * payload = (uint8_t*)malloc(newPayloadSize);
               memcpy(payload, nal.p_payload + skipBytes, newPayloadSize);
               sect = { newPayloadSize, payload, nal.i_type };
               encodedImages->push(sect);

           } else if (nal.i_type == NAL_SLICE_IDR || nal.i_type == NAL_SLICE) {
               //these packets need an additional section at the start
               BYTE *skip = nal.p_payload;
               while (*(skip++) != 0x1);
               int skipBytes = (int)(skip - nal.p_payload);

               std::vector<BYTE> bodyData;
               if (!foundFrame) {
                   if (nal.i_type == NAL_SLICE_IDR) { bodyData.push_back(0x17); } else { bodyData.push_back(0x27); } //add a 17 or a 27 as appropriate
                   bodyData.push_back(1);
                   bodyData.push_back(*timeOffsetAddr);

                   foundFrame = true;
               }

               //put into the payload the bodyData followed by the nal payload
               uint8_t * bodyDataPayload = (uint8_t*)malloc(bodyData.size());
               memcpy(bodyDataPayload, bodyData.data(), bodyData.size() * sizeof(BYTE));

               int newPayloadSize = (nal.i_payload - skipBytes);

               uint8_t * payload = (uint8_t*)malloc(newPayloadSize + sizeof(bodyDataPayload));
               memcpy(payload, bodyDataPayload, sizeof(bodyDataPayload));
               memcpy(payload + sizeof(bodyDataPayload), nal.p_payload + skipBytes, newPayloadSize);
               int totalSize = newPayloadSize + sizeof(bodyDataPayload);
               sect = { totalSize, payload, nal.i_type };
               encodedImages->push(sect);
           } else {
               std::cout << "VideoEncoder: Nal type did not match expected" << std::endl;
               continue;
           }
       }

    The NAL payload data is then put into a struct, videoSection, in a queue buffer:

    //used to transfer encoded data
    struct videoSection {
       int frameSize;
       uint8_t* payload;
       int type;
    };

    After that, it is picked up by the broadcaster, a few more changes are made, and then I call rtmp_send():

    videoSection sect = encodedImages->front();
    encodedImages->pop();

    //std::cout &lt;&lt; "Broadcaster: Frame Size: " &lt;&lt; sect.frameSize &lt;&lt; std::endl;

    //two methods of sending RTMP data, _sendpacket and _write. Using sendpacket for greater control

    RTMPPacket * packet;

    unsigned char* buf = (unsigned char*)sect.payload;

     int type = buf[0] & 0x1f; //I believe &0x1f sets a 32bit limit
    int len = sect.frameSize;
    long timeOffset = GetTickCount() - rtmp_start_time;

    //assign space packet will need
    packet = (RTMPPacket *)malloc(sizeof(RTMPPacket)+RTMP_MAX_HEADER_SIZE + len + 9);
    memset(packet, 0, sizeof(RTMPPacket) + RTMP_MAX_HEADER_SIZE);

    packet->m_body = (char *)packet + sizeof(RTMPPacket) + RTMP_MAX_HEADER_SIZE;
    packet->m_nBodySize = len + 9;

    //std::cout &lt;&lt; "Broadcaster: Packet Size: " &lt;&lt; sizeof(RTMPPacket) + RTMP_MAX_HEADER_SIZE + len + 9 &lt;&lt; std::endl;
    //std::cout &lt;&lt; "Broadcaster: Packet Body Size: " &lt;&lt; len + 9 &lt;&lt; std::endl;

    //set body to point to the packetbody
    unsigned char *body = (unsigned char *)packet->m_body;
    memset(body, 0, len + 9);



    //NAL_SLICE_IDR represents keyframe
    //first element determines packet type
    body[0] = 0x27;//inter-frame h.264
    if (sect.type == NAL_SLICE_IDR) {
       body[0] = 0x17; //h.264 codec id
    }


    //-------------------------------------------------------------------------------
    //this section taken from https://stackoverflow.com/questions/25031759/using-x264-and-librtmp-to-send-live-camera-frame-but-the-flash-cant-show
    //in an effort to understand packet format. it does not resolve my previous issues formatting the data for twitch to play it

    //sets body to be NAL unit
    body[1] = 0x01;
    body[2] = 0x00;
    body[3] = 0x00;
    body[4] = 0x00;

    //>> is a shift right
    //shift len to the right, and AND it
     /*body[5] = (len >> 24) & 0xff;
     body[6] = (len >> 16) & 0xff;
     body[7] = (len >> 8) & 0xff;
     body[8] = (len) & 0xff;*/

    //end code sourced from https://stackoverflow.com/questions/25031759/using-x264-and-librtmp-to-send-live-camera-frame-but-the-flash-cant-show
    //-------------------------------------------------------------------------------

    //copy from buffer into rest of body
     memcpy(&body[9], buf, len);

    //DEBUG

    //save individual packet body to a file with name rtmp[packetnum]
    //determine why some packets do not have 0x27 or 0x17 at the start
    //still happening, makes no sense given the above code

    /*std::string fileLocation = "rtmp" + std::to_string(packCount++);
     std::cout << fileLocation << std::endl;
    const char * charConversion = fileLocation.c_str();

    FILE* saveFile = NULL;
    saveFile = fopen(charConversion, "w+b");//open as write and binary
    if (!fwrite(body, len + 9, 1, saveFile)) {
       std::cout &lt;&lt; "VideoEncoder: Error while trying to write to file" &lt;&lt; std::endl;
    }
    fclose(saveFile);*/

    //END DEBUG

    //other packet details
    packet->m_hasAbsTimestamp = 0;
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    if (rtmp != NULL) {
       packet->m_nInfoField2 = rtmp->m_stream_id;
    }
    packet->m_nChannel = 0x04;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nTimeStamp = timeOffset;

    //send the packet
    if (rtmp != NULL) {
       RTMP_SendPacket(rtmp, packet, TRUE);
    }

    I can see in the inspector that Twitch is receiving the data at a steady 3 kbps, so I'm sure something is wrong with how I'm adjusting the data before sending it. Can anyone advise me on what I'm doing wrong here?
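
    For reference, the body of an RTMP video message is an FLV VideoTagBody; the sketch below (a reading of the FLV specification, not the code above, with the hypothetical helper name flv_avc_nalu_body) shows the expected byte layout for an AVC NALU packet: the frame-type/codec byte, the AVCPacketType, a 3-byte composition time, and a 4-byte length prefix before each NAL unit instead of an Annex-B start code:

     import struct

     def flv_avc_nalu_body(nalus, keyframe, composition_time_ms):
         # Sketch of an FLV VideoTagBody carrying AVC NAL units.
         # nalus: raw NAL units WITHOUT their Annex-B start codes.
         body = bytearray()
         # FrameType in the high nibble (1 = key, 2 = inter), CodecID 7 (AVC) in the low nibble.
         body.append((0x10 if keyframe else 0x20) | 0x07)    # 0x17 or 0x27
         body.append(0x01)                                   # AVCPacketType 1 = NALU
         body += struct.pack(">i", composition_time_ms)[1:]  # SI24 composition time (pts - dts)
         for nalu in nalus:
             body += struct.pack(">I", len(nalu))            # 4-byte NALU length prefix
             body += nalu                                    # NALU data, no start code
         return bytes(body)

    Players also generally expect a separate configuration packet first (AVCPacketType 0 carrying an AVCDecoderConfigurationRecord built from the SPS and PPS) before any NALU packets.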

  • python ffmpeg moov atom not found Invalid data when processing input

    11 May 2018, by Isocrates

    I have a program that records the screen and audio from a microphone, and then combines the video and audio recordings (.mp4 and .wav) into one mkv file.

    I am using Python 3.6 and ffmpeg to achieve this aim. For short videos (<20 sec.) it works, but for longer recordings it presents the following error message:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x55abb3a52540] moov atom not found
    tmp/tmp_0.mp4: Invalid data found when processing input

    Full output:

    ffmpeg version 3.3.7 Copyright (c) 2000-2018 the FFmpeg developers
    built with gcc 7 (GCC)
     configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg
       --docdir=/usr/share/doc/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64
       --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall
       -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
       -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches
       -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic
       -fasynchronous-unwind-tables' --extra-ldflags='-Wl,-z,relro
       -specs=/usr/lib/rpm/redhat/redhat-hardened-ld ' --extra-cflags='-I/usr/include/nvenc '
       --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvo-amrwbenc
       --enable-version3 --enable-bzlib --disable-crystalhd --enable-fontconfig
       --enable-frei0r --enable-gcrypt --enable-gnutls --enable-ladspa --enable-libass
       --enable-libbluray --enable-libcdio --enable-indev=jack --enable-libfreetype
       --enable-libfribidi --enable-libgsm --enable-libmp3lame --enable-nvenc
       --enable-openal --enable-opencl --enable-opengl --enable-libopenjpeg
       --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr
       --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libv4l2
       --enable-libvidstab --enable-libvpx --enable-libx264 --enable-libx265
       --enable-libxvid --enable-avfilter --enable-avresample --enable-postproc
       --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug
       --disable-stripping --shlibdir=/usr/lib64 --enable-libmfx --enable-runtime-cpudetect
     libavutil      55. 58.100 / 55. 58.100
     libavcodec     57. 89.100 / 57. 89.100
     libavformat    57. 71.100 / 57. 71.100
     libavdevice    57.  6.100 / 57.  6.100
     libavfilter     6. 82.100 /  6. 82.100
     libavresample   3.  5.  0 /  3.  5.  0
     libswscale      4.  6.100 /  4.  6.100
     libswresample   2.  7.100 /  2.  7.100
     libpostproc    54.  5.100 / 54.  5.100
    [wav @ 0x55abb3a0b880] Ignoring maximum wav data size, file may be invalid
    [wav @ 0x55abb3a0b880] Estimating duration from bitrate, this may be
    inaccurate
    Guessed Channel Layout for Input Stream #0.0 : stereo
    Input #0, wav, from 'tmp/tmp_0.wav':
     Metadata:
       encoder         : Lavf57.71.100
     Duration: 00:00:21.97, bitrate: 768 kb/s
    Stream #0:0: Audio: pcm_mulaw ([7][0][0][0] / 0x0007), 48000 Hz,
    stereo, s16, 768 kb/s
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x55abb3a52540] moov atom not found
    tmp/tmp_0.mp4: Invalid data found when processing input

    The Python file (ffmpeg.py) is as follows. The AV_COMPILE class is not yet complete, held up by the aforementioned error, and therefore still uses the initial test files as defaults; otherwise it ought to work:

    import os, time, glob

    TMP_DIR = "tmp"
    DISPLAY = os.environ['DISPLAY']
    EXT = {
       'Video':'mp4',
       'Audio':'wav',
       'AV':'mkv',
    }

     class ffmpegVideo:

         FFMPEG_BIN = "ffmpeg"
         AUDIO = False

         def __init__(self, fps = 30, audio = True):
             global TMP_DIR, DISPLAY, EXT

             self.fps = fps

             if audio:
                 self.AUDIO = True

             self.video_filename = self.unique_filename()

             self.command = [ self.FFMPEG_BIN,
                 '-video_size', '1920x1080',
                 '-framerate', str(fps),
                 '-f', 'x11grab',
                 '-i', DISPLAY,
                 '-vcodec', 'libx264',
                 '-qp', '0',
                 '-preset', 'ultrafast',
                 '-y', TMP_DIR + '/' + self.video_filename
             ]

         def start(self):
             import threading as th

             thread = th.Thread(target=self.record)
             thread.start()

         def record(self):
             import subprocess as sp

             self.pipe = sp.Popen(self.command, stderr=sp.PIPE)

             if self.AUDIO:
                 ffmpegAudio().start()

         def stop(self):
             self.pipe.terminate()

         def unique_filename(self):
             global TMP_DIR, EXT

             i = 0

             while os.path.exists((TMP_DIR + '/' + 'tmp_%s.%s') % (i, EXT['Video'])):
                 i += 1

             return ('tmp_%s.%s') % (i, EXT['Video'])

     class ffmpegAudio:

         FFMPEG_BIN = "ffmpeg"

         def __init__(self):

             self.audio_filename = self.unique_filename()

             self.command = [ self.FFMPEG_BIN,
                 '-f', 'pulse',
                 '-ac', '2',
                 '-ar', '48000',
                 '-i', 'default',
                 '-acodec', 'pcm_mulaw',
                 '-y', TMP_DIR + '/' + self.audio_filename
             ]

         def start(self):
             import threading as th

             au_thread = th.Thread(target=self.record)
             au_thread.start()

         def record(self):
             import subprocess as sp

             self.pipe = sp.Popen(self.command, stderr=sp.PIPE)

         def stop(self):
             self.pipe.terminate()

         def unique_filename(self):
             global TMP_DIR, EXT

             i = 0

             while os.path.exists((TMP_DIR + '/' + 'tmp_%s.%s') % (i, EXT['Audio'])):
                 i += 1

             return ('tmp_%s.%s') % (i, EXT['Audio'])

     class AV_COMPILE:

         def __init__(self, au_in = TMP_DIR + '/' + 'out1.wav',
                      vd_in = TMP_DIR + '/' + 'test4.mp4',
                      out = TMP_DIR + '/' + 'av.mkv'):
             import subprocess as sp

             au_in = min(glob.iglob(TMP_DIR + '/*.wav'), key=os.path.getctime)
             vd_in = min(glob.iglob(TMP_DIR + '/*.mp4'), key=os.path.getctime)

             self.command = ('ffmpeg -i %s  -r 30 -i %s -shortest -c:a aac -c:v copy %s') % (au_in, vd_in, out)
             sp.call(self.command, shell=True)

    I would be grateful for any assistance in understanding why this happens and how to solve the error. I am also happy to receive any other tips on how to improve this code, or to hear about any other problems anyone might notice.

    EDIT:
    I now believe that the reason for this error on longer videos, and occasionally on shorter ones, is that the program proceeds to compile the AV output whether or not ffmpeg has finished writing the original video file. I tested delaying AV_COMPILE with a time.sleep(10) call, and this seems to work.

    However, as the video files get larger, the delay obviously needs to be adjusted. So I would like to know how I can separately check the integrity of the video file and determine that it is safe to proceed to the next step.
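
    One possible direction (a sketch only, not a tested fix, using the hypothetical helpers stop_recording and is_valid_video, and assuming the recording process is started with stdin=subprocess.PIPE, which the code above does not do yet): let ffmpeg shut down cleanly by sending it 'q' instead of calling terminate(), since killing ffmpeg mid-write is what typically leaves an mp4 without a moov atom, and then probe the file before muxing:

     import subprocess as sp

     def stop_recording(pipe):
         # Ask ffmpeg to finish writing the file (including the moov atom)
         # instead of killing it mid-write; requires stdin=sp.PIPE on the Popen.
         pipe.communicate(input=b'q', timeout=30)

     def is_valid_video(path):
         # ffprobe exits non-zero if it cannot parse the file, e.g. when the
         # moov atom is missing, so this doubles as an integrity check.
         result = sp.run(
             ['ffprobe', '-v', 'error', '-show_entries', 'format=duration',
              '-of', 'csv=p=0', path],
             stdout=sp.PIPE, stderr=sp.PIPE, universal_newlines=True)
         return result.returncode == 0 and result.stdout.strip() != ''

    AV_COMPILE could then wait until is_valid_video() succeeds for the newest mp4 instead of relying on a fixed time.sleep(10).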