
Other articles (111)

  • Automatic MediaSPIP installation script

    25 April 2011

    To work around installation difficulties caused mainly by server-side software dependencies, an all-in-one bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a "root" account, which allows the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

  • Adding user-specific information and other author-related behavior changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you modify certain user-related behaviors (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the "champs extras 2" and "Interface pour champs extras" plugins.

  • What exactly does this script do?

    18 January 2011

    This script is written in bash, so it can easily be used on any server.
    It is only compatible with a specific list of distributions (see the list of compatible distributions).
    Installation of MediaSPIP dependencies
    Its main role is to install all the software dependencies needed on the server side, namely:
    The basic tools needed to install the rest of the dependencies; development tools: build-essential (via APT from the official repositories); (...)

On other sites (9073)

  • How to get a video frame at a specific time from an mp4

    11 December 2015, by man-r

    I have an mp4 video as a byte array and I need to generate a thumbnail from its middle frame (e.g. if the video is 10 seconds long, I need the picture from the 5th second).

    I managed to parse through the file and extract its boxes (atoms). I also managed to get the video length from the mvhd box, and to extract
    1. the time-to-sample table from the stts box,
    2. the sample-to-chunk table from the stsc box,
    3. the chunk-offset table from the stco box,
    4. the sample-size table from the stsz box,
    5. the sync-sample table from the stss box.

    I know that all the actual media data is in the mdat box and that I need to correlate the tables above to find the exact frame offset in the file, but my question is: how? The tables' data seems to be packed (especially the time-to-sample table), and I don't know how to expand it. A sketch of the chained lookup follows the code samples below.

    Any help is appreciated.

    Below are code samples.

    Code to convert bytes to hex:

    final static char[] hexArray = "0123456789ABCDEF".toCharArray();

    public static char[] bytesToHex(byte[] bytes) {
       char[] hexChars = new char[bytes.length * 2];
       for ( int j = 0; j < bytes.length; j++ ) {
           int v = bytes[j] & 0xFF;                     // treat the byte as unsigned

           hexChars[j * 2] = hexArray[v >>> 4];         // high nibble
           hexChars[j * 2 + 1] = hexArray[v & 0x0F];    // low nibble
       }
       return hexChars;
    }
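
    A hypothetical usage (the file name is a placeholder): read the whole mp4 into memory and hex-encode it once, so the box parsers below can work on the char[] form:

    byte[] data = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("video.mp4"));
    char[] hex = bytesToHex(data);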

    Code for getting the box offset:

    // box types as hex-encoded ASCII: "moov", "mvhd", "trak", "mdia", "minf",
    // "stbl", "stsd", "stts", "stss", "stsc", "stco", "stsz"
    final static String MOOV                          = "6D6F6F76";
    final static String MOOV_MVHD                     = "6D766864";
    final static String MOOV_TRAK                     = "7472616B";
    final static String MOOV_TRAK_MDIA                = "6D646961";
    final static String MOOV_TRAK_MDIA_MINF           = "6D696E66";
    final static String MOOV_TRAK_MDIA_MINF_STBL      = "7374626C";
    final static String MOOV_TRAK_MDIA_MINF_STBL_STSD = "73747364";
    final static String MOOV_TRAK_MDIA_MINF_STBL_STTS = "73747473";
    final static String MOOV_TRAK_MDIA_MINF_STBL_STSS = "73747373";
    final static String MOOV_TRAK_MDIA_MINF_STBL_STSC = "73747363";
    final static String MOOV_TRAK_MDIA_MINF_STBL_STCO = "7374636F";
    final static String MOOV_TRAK_MDIA_MINF_STBL_STSZ = "7374737A";

    static int getBox(char[] s, int offset, String type) {
       // each box starts with a 4-byte size followed by a 4-byte type;
       // known container boxes are entered (their children begin right after
       // the 8-byte header), any other box is skipped to its next sibling
       for (int i = offset*2; i + 16 <= s.length; ) {
           int size = Integer.parseInt(new String(Arrays.copyOfRange(s, i, i + 8)), 16);
           String boxType = new String(Arrays.copyOfRange(s, i + 8, i + 16));
           if (boxType.equals(type)) {
               return i / 2;                    // byte offset of the box header
           }
           if (size <= 0) {
               return -1;                       // malformed size; bail out
           }
           boolean container = boxType.equals(MOOV) || boxType.equals(MOOV_TRAK)
                   || boxType.equals(MOOV_TRAK_MDIA) || boxType.equals(MOOV_TRAK_MDIA_MINF)
                   || boxType.equals(MOOV_TRAK_MDIA_MINF_STBL);
           i += container ? 16 : (size*2);      // descend into it, or skip it
       }
       return -1;                               // not found
    }

    Code for getting the duration and timescale:

    static int[] getDuration(char[] s) {
       int mvhdOffset = getBox(s, 0, MOOV_MVHD);
       // version-0 mvhd layout: size(4) type(4) version(1) flags(3)
       // creation_time(4) modification_time(4) timescale(4) duration(4)
       int timeScaleStart = (mvhdOffset*2) + (4 + 4 + 1 + 3 + 4 + 4)*2;
       int timeScaleEnd   = (mvhdOffset*2) + (4 + 4 + 1 + 3 + 4 + 4 + 4)*2;

       int durationStart  = (mvhdOffset*2) + (4 + 4 + 1 + 3 + 4 + 4 + 4)*2;
       int durationEnd    = (mvhdOffset*2) + (4 + 4 + 1 + 3 + 4 + 4 + 4 + 4)*2;

       String timeScaleHex = new String(Arrays.copyOfRange(s, timeScaleStart, timeScaleEnd));
       String durationHex = new String(Arrays.copyOfRange(s, durationStart, durationEnd));

       int timeScale = Integer.parseInt(timeScaleHex, 16);   // units per second
       int duration = Integer.parseInt(durationHex, 16);     // in timescale units

       int[] result = {duration, timeScale};
       return result;
    }
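
    From these two values the target time for the thumbnail can be derived (a minimal sketch; hex is the hex-encoded file from the usage example above):

    int[] dt = getDuration(hex);
    double lengthSeconds = (double) dt[0] / dt[1];  // mvhd duration / timescale
    double middleSeconds = lengthSeconds / 2.0;     // middle of the video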

    Code to get the time-to-sample table:

    static int[][] getTimeToSampleTable(char[] s, int trakOffset) {
       int offset = getBox(s, trakOffset, MOOV_TRAK_MDIA_MINF_STBL_STTS);
       int sizeStart = offset*2;
       int sizeEnd   = offset*2 + (4)*2;

       int typeStart = offset*2 + (4)*2;
       int typeEnd   = offset*2 + (4 + 4)*2;

       int noOfEntriesStart = offset*2 + (4 + 4 + 1 + 3)*2;
       int noOfEntriesEnd   = offset*2 + (4 + 4 + 1 + 3 + 4)*2;

       String sizeHex = new String(Arrays.copyOfRange(s, sizeStart, sizeEnd));
       String typeHex = new String(Arrays.copyOfRange(s, typeStart, typeEnd));
       String noOfEntriesHex = new String(Arrays.copyOfRange(s, noOfEntriesStart, noOfEntriesEnd));

       int size = Integer.parseInt(sizeHex, 16);
       int noOfEntries = Integer.parseInt(noOfEntriesHex, 16);

       int[][] timeToSampleTable = new int[noOfEntries][2];

       for (int i = 0; i < noOfEntries; i++) {
           // each stts entry is 8 bytes: a 4-byte sample count followed by a
           // 4-byte sample delta (duration of each sample, in timescale units)
           int entryStart = offset*2 + (4 + 4 + 1 + 3 + 4)*2 + i*(4 + 4)*2;
           timeToSampleTable[i][0] = Integer.parseInt(new String(Arrays.copyOfRange(s, entryStart, entryStart + 8)), 16);
           timeToSampleTable[i][1] = Integer.parseInt(new String(Arrays.copyOfRange(s, entryStart + 8, entryStart + 16)), 16);
       }

       return timeToSampleTable;
    }
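
    For reference, here is a minimal sketch of how I assume the five tables chain together to resolve a target time to a frame's byte offset inside mdat. The table parameters are hypothetical, parsed from their boxes in the same style as getTimeToSampleTable above; note that stts deltas use the track's media timescale (from mdhd), which can differ from the mvhd movie timescale.

    static long findFrameOffset(double targetSeconds, int mediaTimescale,
                                int[][] stts,   // {sampleCount, sampleDelta} runs
                                int[] stss,     // sync sample numbers, 1-based
                                int[][] stsc,   // {firstChunk, samplesPerChunk, descIdx}
                                long[] stco,    // chunk offsets in the file
                                int[] stsz) {   // per-sample sizes in bytes
        // 1. stts is run-length coded: each run says "sampleCount samples,
        //    each lasting sampleDelta timescale units". Walk it until the
        //    accumulated time covers the target.
        long target = (long) (targetSeconds * mediaTimescale);
        long time = 0;
        int sample = 1;                              // sample numbers are 1-based
        outer:
        for (int[] run : stts) {
            for (int k = 0; k < run[0]; k++) {
                if (time + run[1] > target) break outer;
                time += run[1];
                sample++;
            }
        }

        // 2. Snap back to the nearest preceding sync sample (keyframe);
        //    a non-keyframe cannot be decoded on its own.
        int sync = 1;
        for (int n : stss) {
            if (n > sample) break;
            sync = n;
        }
        sample = sync;

        // 3. stsc maps samples to chunks: each entry applies from firstChunk
        //    up to the next entry's firstChunk - 1 (the last entry runs to
        //    the final chunk). Find the chunk holding our sample and the
        //    number of the first sample in that chunk.
        int chunk = 1, firstSampleOfChunk = 1;
        for (int e = 0; e < stsc.length; e++) {
            int first = stsc[e][0], perChunk = stsc[e][1];
            int lastChunk = (e + 1 < stsc.length) ? stsc[e + 1][0] - 1 : stco.length;
            int samplesInRange = (lastChunk - first + 1) * perChunk;
            if (sample < firstSampleOfChunk + samplesInRange) {
                chunk = first + (sample - firstSampleOfChunk) / perChunk;
                firstSampleOfChunk += (chunk - first) * perChunk;
                break;
            }
            firstSampleOfChunk += samplesInRange;
        }

        // 4. File offset = chunk offset + sizes of the earlier samples
        //    in the same chunk (stsz is indexed by sample number - 1).
        long offset = stco[chunk - 1];
        for (int n = firstSampleOfChunk; n < sample; n++) {
            offset += stsz[n - 1];
        }
        return offset;                           // byte position of the frame
    }
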
  • ffmpeg issues "501 Not Implemented" while recording an RTSP stream

    28 February 2019, by atsushi

    I have a 4K camera (Sony SNC-VB770) streaming RTSP.

    I'm trying to record the stream into files (each of a handy length, say, 1 hour)
    using a simple script that repeatedly launches ffmpeg (ver 4.1):

    while : ; do
     # (set $url and $outfile, and then)
     ffmpeg -rtsp_transport tcp -t 3600 -y -i $url -c copy -map 0:0 -b:v 16000k $outfile
    done
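
    A rough Java equivalent of that supervision loop (a sketch: ffmpeg is assumed to be on PATH, the output naming is a placeholder, and -b:v is dropped since it has no effect with -c copy):

    import java.io.IOException;

    public class RecordLoop {
        public static void main(String[] args) throws IOException, InterruptedException {
            String url = "rtsp://10.40.35.90/media/video1";
            for (int n = 0; ; n++) {
                String outfile = String.format("rec-%05d.mp4", n);  // placeholder naming
                Process p = new ProcessBuilder(
                        "ffmpeg", "-rtsp_transport", "tcp", "-t", "3600", "-y",
                        "-i", url, "-c", "copy", "-map", "0:0", outfile)
                        .inheritIO()   // pass ffmpeg's console output through
                        .start();
                int exit = p.waitFor();
                System.err.println("ffmpeg exited with " + exit + ", restarting");
            }
        }
    }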

    If I run the script on a local PC directly connected to the camera, it works (for longer than a week, at least).
    However, if I do the same on a server machine located in a data center, it fails randomly with no error message.
    Sometimes it runs for a few days; sometimes it dies within a minute.

    Typical output looks like the following:

    # devname: snc-vb770
    # url: rtsp://10.40.35.90/media/video1
    # vb: 16000k
    # datefmt %d%H
    # addtimestamp 0
    no timestamp
    ffmpeg version 4.1 Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-18)
     configuration: --prefix=/usr/local/ffmpeg-4.1 --enable-openssl --enable-gpl --enable-version3 --enable-nonfree --enable-shared --enable-libx264 --enable-libvorbis --enable-filter=drawtext --enable-libfreetype --enable-libfribidi --enable-fontconfig
     libavutil      56. 22.100 / 56. 22.100
     libavcodec     58. 35.100 / 58. 35.100
     libavformat    58. 20.100 / 58. 20.100
     libavdevice    58.  5.100 / 58.  5.100
     libavfilter     7. 40.101 /  7. 40.101
     libswscale      5.  3.100 /  5.  3.100
     libswresample   3.  3.100 /  3.  3.100
     libpostproc    55.  3.100 / 55.  3.100
    Input #0, rtsp, from 'rtsp://10.40.35.90/media/video1':
     Metadata:
       title           : Sony RTSP Server
     Duration: N/A, start: 0.066667, bitrate: N/A
       Stream #0:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 3840x2160 [SAR 1:1 DAR 16:9], 14.99 fps, 14.99 tbr, 90k tbn, 29.97 tbc
    Output #0, mp4, to './2811.mp4':
     Metadata:
       title           : Sony RTSP Server
       encoder         : Lavf58.20.100
       Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 3840x2160 [SAR 1:1 DAR 16:9], q=2-31, 16000 kb/s, 14.99 fps, 14.99 tbr, 90k tbn, 90k tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    [mp4 @ 0x24e4ec0] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
    [mp4 @ 0x24e4ec0] pts has no value
    [mp4 @ 0x24e4ec0] Non-monotonous DTS in output stream 0:0; previous: 0, current: 0; changing to 1. This may result in incorrect timestamps in the output file.
    frame=   33 fps=0.0 q=-1.0 size=    1792kB time=00:00:02.00 bitrate=7332.9kbits/s speed=3.57x
    ...
    frame=  104 fps=8.4 q=-1.0 Lsize=    6532kB time=00:00:06.74 bitrate=7939.6kbits/s speed=0.548x
    video:6531kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.023506%

    I've looked into the RTSP packets and found that an "RTSP/1.0 501 Not Implemented" is sent from ffmpeg to the camera.
    After that the camera eventually sends back "RTSP/1.0 505 RTSP Version not supported", and then ffmpeg quits shortly afterwards.

    The "501" packet seems to be generated by libavformat/rtsp.c:ff_rtsp_read_reply()
    when ffmpeg receives a malformed RTSP packet with method=(null), status_code=0.
    I don't know why such packets arrive at random times, or which component is at fault (maybe the camera, maybe one of the network switches or routers on the path from the camera to the server machine).
    But in any case, I don't want the recording to stop because of those malformed packets.

    Is there any workaround to make ffmpeg ignore invalid RTSP packets and just continue recording?

    Additional information:

    • I've tested the recording with both ffmpeg 4.1 and 2.8.4; no difference observed.

    • No difference observed at lower resolutions or lower bitrates.

    • I have 3 cameras from various manufacturers in the same network environment.
      All three have been working without problems for more than a month.
      Only the Sony SNC-VB770 shows this strange behavior.

  • Merge Conference Video and Audio call output using hstack ffmpeg

    3 January 2020, by venkat

    I have two video and two audio files:

    Input #0, matroska,webm, from 'first.mkv':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2017-10-16 14:13:15
     Duration: 00:06:01.24, start: 3.817000, bitrate: 1547 kb/s
       Stream #0:0(eng): Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 16.75 tbr, 1k tbn, 1k tbc (default)
       Metadata:
         title           : Video
    Input #1, matroska,webm, from 'second.mkv':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2017-10-16 14:13:24
     Duration: 00:05:49.79, start: 13.509000, bitrate: 810 kb/s
       Stream #1:0(eng): Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 1k tbr, 1k tbn, 1k tbc (default)
       Metadata:
         title           : Video
    Input #2, matroska,webm, from 'first.mka':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2017-10-16 14:13:15
     Duration: 00:06:01.30, start: 3.786000, bitrate: 46 kb/s
       Stream #3:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
       Metadata:
         title           : Audio
    Input #3, matroska,webm, from 'second.mka':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2017-10-16 14:13:24
     Duration: 00:05:50.61, start: 13.498000, bitrate: 50 kb/s
       Stream #2:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
       Metadata:
         title           : Audio

    The files above are the output of a video conference call. I want to merge them all together and show the two videos side by side.

    The start times of the videos and audios differ; I want to sync the audio and video respectively and merge the videos side by side.

    Initially I used the following command to merge the two videos:

    ffmpeg -i first.mkv -i second.mkv -filter_complex "
    [0:v]scale=320:240,pad=325:240,setsar=1[l];[1:v]scale=320:240,setsar=1[r];
    [l][r]hstack" -c:v libx264 -preset ultrafast -crf 0 merged.mp4

    After that I used the following command to merge, as suggested by @mulvya:

    ffmpeg -ss 00:00:09.692 -i first.mkv -i second.mkv -i first.mka -i second.mka -filter_complex "
    [0:v]scale=320:240,pad=325:240,setsar=1[l];[1:v]scale=320:240,setsar=1[r];
    [l][r]hstack=shortest=1[v];[3]adelay=9712|9712[3a];[2][3a]amerge[a]" -map '[v]' -map '[a]' -c:v libx264 -preset slower -crf 0 -c:a aac -ac 2 merged.mp4

    For the -ss value I took the difference between the video start times (13.509 - 3.817 = 9.692 s), and for the adelay value the difference between the audio start times (13.498 - 3.786 = 9.712 s, i.e. 9712 ms).

    Sample test conference files

    1. https://drive.google.com/open?id=0ByVMq5U43FGlbXpXR3JtSnFTaWM

    2. https://drive.google.com/open?id=0ByVMq5U43FGlbENVRWlTWktQb3M

    3. https://drive.google.com/open?id=0ByVMq5U43FGldndlZDNpNWxWY2M

    4. https://drive.google.com/open?id=0ByVMq5U43FGlei1oRjNKeXRZbE0

    Now I am facing audio sync issues, and the second audio track is very low.

    The expected result is the first and second videos merged side by side, with the audio in sync with the merged video.

    Now I am able to get close to the desired output using the command below:

    ffmpeg -i first.mkv -i second.mkv -i first.mka -i second.mka -filter_complex "
    [0]scale=320:240,pad=645:240,setsar=1[l];[1]scale=320:240,setpts=PTS-STARTPTS+9.723/TB,setsar=1[1v];
    [l][1v]overlay=x=325[v];[3]adelay=9712|9712[1a];[2]adelay=31|31[2a];
    [2a][1a]amerge=inputs=2[a]" -map '[v]' -map '[a]' -c:v libx264 -preset slower -crf 0 -c:a aac -ac 2 merged.mp4

    but I am again facing the following issues:

    1. The second video is not encoded properly; it gets stuck in the middle during playback.
    2. Audio sync issues.
    3. The conversion process is slow. How can the above be done using hstack?

    Any suggestions or help?